August 2014 Industry trends and perspectives

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends and perspectives about clouds, virtualization, data and storage infrastructure topics, among related themes.

StorageIO comments and perspectives in the news

Virtual Desktop Infrastructure (VDI) remains a popular industry and IT customer topic, not to mention being one of the favorite themes of Solid State Device (SSD) vendors. SSD component and system solution vendors along with their supporters love VDI because the by-product of aggregation (e.g. consolidation) of desktops is aggravation: increased storage I/O performance demand (IOPS, bandwidth, response time) from consolidating the various desktops. It should not be a surprise that some of the biggest fans encouraging organizations to adopt VDI are the SSD vendors. Read some of my comments and perspectives on VDI here at FedTech Magazine.

Speaking of virtualizing the data center, software defined data centers (SDDC) along with software defined networking (SDN) and software defined storage (SDS) remain popular, including some software defined marketing (SDM). Here are some of my comments and perspectives on moving beyond the hype of SDDC.

Recently the Fibre Channel Industry Association (FCIA), which works with the T11 standards body on both legacy or classic Fibre Channel (FC) as well as the newer FC over Ethernet (FCoE), made some announcements. These announcements include enhancements such as Fibre Channel Back Bone version 6 (FC-BB-6) among others. Both FC and FCoE are alive and doing well; granted, one (FC) has been around longer and can be seen at its plateau, while the other (FCoE) continues to evolve and grow in adoption. In some ways, FCoE is in a similar role today to where FC was in the late 90s and early 2000s, ironically facing some common FUD. You can read my comments here as part of a quote in support of the announcement, along with more of my industry trend perspectives in this blog post here.

Buyers guides are popular with vendors and VARs as well as IT organizations (e.g. customers); following are some of my comments and industry trend perspectives appearing in Enterprise Storage Forum. Here are perspectives on buyers guides for Enterprise File Sync and Share (EFSS), Unified Data Storage and Object Storage.

EMC has come under pressure, as mentioned in earlier StorageIO update newsletters, to increase its shareholder benefit, including a spin-off of VMware. Here are some of my comments and perspectives that appeared in CruxialCIO. Read more industry trends perspectives comments on the StorageIO news page.

StorageIO video and audio podcasts
StorageIOblog posts and perspectives

Despite being declared dead, traditional or classic Fibre Channel (FC) along with FC over Ethernet (FCoE) continues to evolve with FC-BB-6; read more here.

VMworld 2014 took place this past week and included announcements about EVO:RACK and EVO:RAIL (more on this in a future edition). You can get started learning about EVO:RACK and EVO:RAIL at Duncan Epping's (aka @DuncanYB) Yellow-Bricks site. VMware Virtual SAN (VSAN) is at the heart of EVO, which you can read an overview of in this earlier StorageIO update newsletter (March 2014). Also watch for some extra content that I'm working on, including video podcasts, articles and blog posts from my trip to VMworld 2014.

However, one of the themes in the background of VMworld 2014 is the current beta of VMware vSphere V6 along with Virtual Volumes aka VVOL's. The following are a couple of my recent posts, including a primer overview of VVOL's along with a poll where you can cast your vote. Check out Are VMware VVOL's in your virtual server and storage I/O future? and VMware VVOL's and storage I/O fundamentals (Part 1) along with (Part 2).
StorageIO events and activities
The StorageIO calendar continues to evolve including several new events being added for September and well into the fall with more in the works including upcoming Dutch European sessions the week of October 6th in Nijkerk Holland (learn more here). The following are some upcoming September events. These include live in-person seminars, conferences, keynote and speaking activities as well as on-line webinars, twitter chats, Google+ hangouts among others.
Note: Dates, times, venues and subject content are subject to change; refer to the events page for current status. Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, software defined, big data, little data, cloud and object storage, performance and management trends among others. Vendors, VARs and event organizers: give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.
Server and StorageIO Technology Tips and Tools
In addition to the industry trends and perspectives comments in the news mentioned above, along with the StorageIO blog posts, the following are some of my recent articles and tips that have appeared in various industry venues. Over at the new Storage Acceleration site I have a couple of pieces: the first is What, When, Why & How to Accelerate Storage and the other is Tips for Measuring Your Storage Acceleration.
StorageIO Update Newsletter Archives

Click here to view earlier StorageIO Update newsletters (HTML and PDF versions) at www.storageio.com/newsletter. Subscribe to this newsletter (and pass it along) by clicking here (via secure Campaigner site). View archives of past StorageIO Update newsletters as well as download PDF versions at: www.storageio.com/newsletter
Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
VMware VVOLs and storage I/O fundamentals (Part 2)
VMware VVOL’s and storage I/O fundamentals (Part II)
Note that this is a three part series with the first piece here (e.g. Are VMware VVOL's in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 2).
Picking up from where we left off in the first part of VMware VVOL's and storage I/O fundamentals, let's take a closer look at VVOL's.
First, however, let's be clear: while VMware uses terms including object and object storage in the context of VVOL's, it's not the same as some other object storage solutions. Learn more about object storage here at www.objectstoragecenter.com
Are VVOL’s accessed like other object storage (e.g. S3)?
No. VVOL's are accessed via the VMware software and associated APIs that are supported by various storage providers. VVOL's are not LUN's like regular block (e.g. DAS or SAN) storage that use SAS, iSCSI, FC, FCoE or IBA/SRP, nor are they NAS volumes like NFS mount points. Likewise VVOL's are not accessed using any of the various object storage access methods mentioned above (e.g. AWS S3, REST, CDMI, etc.); instead they are an application-specific implementation. For some of you this approach of an application-specific or unique storage access method may be new, perhaps revolutionary; otoh, some of you might be having a déjà vu moment right about now.
A VVOL is not a LUN in the context of what you may know and like (or hate, even if you have never worked with them); likewise it is not a NAS volume like you know (or have heard of); neither are they objects in the context of what you might have seen or heard, such as S3 among others.
Keep in mind that what makes up a VMware virtual machine are the VMX, VMDK and some other files (shown in the figure below), and if enough information is known about where those blocks of data are or can be found, they can be worked upon. Also keep in mind that, at least near-term, block is the lowest common denominator upon which all file systems and object repositories are built.
VMware ESXi storage I/O, IOPS and data store basics
Here is the thing: VVOL's will be accessible via a block interface such as iSCSI, FC or FCoE, or for that matter over Ethernet-based IP using NFS. Think of these storage interfaces and access mechanisms as the general transport for how vSphere ESXi will communicate with the storage system (e.g. the data path) under vCenter management.
What is happening inside the storage system that will be presented back to ESXi will be different than normal SCSI LUN contents, and only understood by the VMware hypervisor. ESXi will still tell the storage system what it wants to do, including moving blocks of data. The storage system however will have more insight and awareness into the context of what those blocks of data mean. This is how storage systems will be able to more closely integrate snapshots, replication, cloning and other functions: by having awareness of which data to move, as opposed to moving or working with an entire LUN where a VMDK may live. Keep in mind that the storage system will still function as it normally would; just think of VVOL as another or new personality and access mechanism used for VMware to communicate with and manage storage.
VMware VVOL concepts (in general) with VMDK being pushed down into the storage system
Think in terms of iSCSI (or FC or something else) for block, or NFS for NAS, as being the addressing mechanism used to communicate between ESXi and the storage array, except that instead of traditional SCSI LUN access and mapping, more work and insight is pushed down into the array. Also keep in mind that a LUN is simply an address range referenced using Logical Block Numbers (LBNs) or Logical Block Addresses (LBAs). In the case of a storage array, it in turn manages placement of data on SSDs or HDDs, in turn using blocks (aka LBAs/LBNs). In other words, a host that does not speak VVOL would get an error if trying to use a LUN or target on a storage system that is a VVOL, assuming it is not masked or hidden ;).
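To make the linear block addressing idea concrete, here is a minimal Python sketch (assuming 512-byte logical blocks; many newer devices use 4KiB) of mapping byte offsets to LBAs.

```python
# Minimal sketch: mapping byte offsets to Logical Block Addresses (LBAs).
# Assumes traditional 512-byte logical blocks (many devices now use 4096).

SECTOR_SIZE = 512  # bytes per logical block (assumption)

def byte_offset_to_lba(offset: int) -> int:
    """Return the LBA containing the given byte offset."""
    return offset // SECTOR_SIZE

def lba_range(offset: int, length: int) -> range:
    """Return the range of LBAs touched by an I/O of `length` bytes at `offset`."""
    first = offset // SECTOR_SIZE
    last = (offset + length - 1) // SECTOR_SIZE
    return range(first, last + 1)

# Example: a 4KiB read at byte offset 1MiB touches LBAs 2048..2055
print(list(lba_range(1024 * 1024, 4096)))
```

A LUN, in this view, is just one big linear table of such addresses; what VVOL's change is who understands which ranges belong to which VMware entity.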
What’s the Storage Provider (SP)
The Storage Provider aka SP is created by the, well, the provider of the storage system or appliance, leveraging a VMware API (hint: sign up for the beta and there is an SDK). Simply put, the SP is a two-way communication mechanism leveraging VASA for reporting information, configuration and other insight up to the VMware ESXi hypervisor, vCenter and other management tools. In addition, the storage provider receives VASA configuration information from VMware about how to configure the storage system (e.g. storage containers). Keep in mind that the SP is the out-of-band management interface between the storage system supporting and presenting VVOL's and the VMware hypervisors.
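For illustration only, here is a conceptual Python sketch of that two-way flow. This is NOT the actual VASA API; every class and method name below is hypothetical, meant only to show the shape of the upward (reporting) and downward (configuration) directions.

```python
# Conceptual sketch only -- NOT the actual VASA API; all names hypothetical.
from dataclasses import dataclass, field

@dataclass
class FakeArray:                       # stand-in for a storage system
    free_gb: int = 1000
    media_types: list = field(default_factory=lambda: ["ssd", "hdd"])
    can_replicate: bool = True
    containers: dict = field(default_factory=dict)

    def create_container(self, name: str, size_gb: int) -> None:
        self.containers[name] = size_gb

class HypotheticalStorageProvider:
    def __init__(self, array: FakeArray):
        self.array = array

    def report_capabilities(self) -> dict:
        # Upward direction: publish capacity and capability information so
        # vCenter/ESXi can make placement and service-level decisions.
        return {
            "capacity_free_gb": self.array.free_gb,
            "media": self.array.media_types,
            "replication": self.array.can_replicate,
        }

    def apply_configuration(self, request: dict) -> None:
        # Downward direction: receive configuration from VMware, e.g. a
        # request to create a storage container of a given size.
        if request.get("action") == "create_storage_container":
            self.array.create_container(request["name"], request["size_gb"])

sp = HypotheticalStorageProvider(FakeArray())
print(sp.report_capabilities())
sp.apply_configuration({"action": "create_storage_container",
                        "name": "gold", "size_gb": 500})
```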
What’s the Storage Container (SC)
This is a storage pool created on the storage array or appliance (e.g. VMware vCenter works with the array and storage provider (SP) to create it) in place of using a normal LUN. With an SP and PE, the storage container becomes visible to ESXi hosts, and VVOL's can be created in the storage container until it runs out of space. Also note that the storage container takes on the storage profile assigned to it, which is inherited by the VVOL's in it. This is in place of presenting LUN's to ESXi on which you can then create VMFS data stores (or use as raw) and then carve storage to VMs.
Protocol endpoint (PE)
The PE provides visibility for the VMware hypervisor to see and access VMDK's and other objects (e.g. .vmx, swap, etc.) stored in VVOL's. The protocol endpoint (PE) manages or directs I/O received from the VM, enabling scaling across many virtual volumes by leveraging multipathing of the PE (inherited by the VVOL's). Note that for storage I/O operations, the PE is simply a pass-through mechanism and does not store the VMDK or other contents. If using iSCSI, FC, FCoE or another SAN interface, then the PE works on a LUN basis (again, not actually storing data), and if using NAS NFS, then with a mount point. The key point is that the PE gets out of the way.
VVOL Poll
What are your VVOL plans? View results and cast your vote here.
Wrap up (for now)
There certainly are many more details to VVOL's that you can get a preview of in the beta, as well as via various demos, webinars and VMworld sessions as more becomes public. However for now, hope you found this quick overview of VVOL's of use. Since VVOL's at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta, or poke around the web (for now). Also if you have not seen the first part overview to this piece, check it out here, as I give some more links to get you started to learn more about VVOL's.
Keep an eye on and learn more about VVOL's at VMworld 2014 as well as in various other venues.
IMHO VVOL's are or will be in your future; however, the question will be: is there going to be a back to the future moment for some of you with VVOL's?
What VVOL questions, comments and concerns are in your future and on your mind?
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
VMware VVOLs storage I/O fundamentals (Part 1)
VMware VVOL’s storage I/O fundamentals (Part I)
Note that this is a three part series with the first piece here (e.g. Are VMware VVOL's in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 2).
Some of you may already be participating in the VMware beta of VVOL involving one of the initial storage vendors also in the beta program.
Ok, now let's go a bit deeper. However, if you want some good music to listen to while reading this, check out @BruceRave GoDeepMusic.Net and shows here.
Taking a step back, digging deeper into Storage I/O and VVOL’s fundamentals
Instead of a VM host accessing its virtual disk (aka VMDK) stored in a VMFS-formatted data store (part of the ESXi hypervisor) built on top of a SCSI LUN (e.g. SAS, SATA, iSCSI, Fibre Channel aka FC, FCoE aka FC over Ethernet, IBA/SRP, etc.) or an NFS file system presented by a storage system (or appliance), VVOL's push more functionality and visibility down into the storage system. VVOL's shift more intelligence and work from the hypervisor down into the storage system. Instead of a storage system simply presenting a SCSI LUN or NFS mount point and having limited (coarse) to no visibility into how the underlying storage bits, bytes and blocks are being used, storage systems gain more awareness.
Keep in mind that even files and objects still ultimately get mapped to pages and blocks aka sectors, even on NAND flash-based SSD's. However, also keep an eye on some new technology such as the Seagate Kinetic drive that, instead of responding to SCSI block-based commands, leverages object APIs and associated software on servers. Read more about these emerging trends here and here at objectstoragecenter.com.
With a normal SCSI LUN, the underlying storage system has no knowledge of how the upper-level operating system, hypervisor, file system or application such as a database (doing raw I/O) is allocating the pages or blocks of memory aka storage. It is up to the upper-level storage and data management tools to map from objects and files to the corresponding extents, pages and logical block addresses (LBA) understood by the storage system. In the case of a NAS solution, there is a layer of abstraction placed over the underlying block storage, handling file management and the associated file-to-LBA mapping activity.
Storage I/O and IOP basics and addressing: LBA’s and LBN’s
Getting back to VVOL: instead of simply presenting a LUN, which is essentially a linear range of LBA's (think of a big table or array) where the hypervisor then manages data placement and access, the storage system now gains insight into which LBA's correspond to various entities such as a VMDK or VMX, log, clone, swap or other VMware objects. With this extra insight, storage systems can now perform native and more granular functions such as clone, replication and snapshot, among others, as opposed to simply working on a coarse LUN basis. Similar concepts extend over to NAS NFS-based access. Granted, there is more to VVOL's, including the ability to get the underlying storage system more closely integrated with the virtual machine, hypervisor and associated management, including supported service management and classes or categories of service across performance, availability, capacity and economics.
What about VVOL, VAAI and VASA?
VVOL's build on earlier VMware initiatives including VAAI and VASA. With VAAI, VMware hypervisors can off-load common functions to storage systems that support features such as copy, clone and zero copy, among others, much like how a computer can off-load graphics processing to a graphics card if present.
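To make the off-load idea concrete, here is a conceptual Python sketch (hypothetical object and method names, not real VAAI primitives or their SCSI commands, though the VAAI full-copy primitive is based on SCSI EXTENDED COPY aka XCOPY): if the array advertises a copy capability, the host delegates; otherwise it falls back to reading and rewriting every block itself.

```python
# Conceptual sketch (hypothetical names, not a real VAAI implementation).

def clone_virtual_disk(array, src: str, dst: str, size_blocks: int) -> None:
    if getattr(array, "supports_xcopy", False):
        # Off-loaded path: host sends one command, the array moves the data
        # internally (analogous to the VAAI full copy / XCOPY primitive).
        array.xcopy(src, dst, size_blocks)
    else:
        # Fallback path: every block flows up through the host and back
        # down again, consuming host CPU, memory and fabric bandwidth.
        for lba in range(size_blocks):
            block = array.read(src, lba)
            array.write(dst, lba, block)
```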
VASA however provides a means for visibility, insight and awareness between the hypervisor and its associated management (e.g. vCenter etc) as well as the storage system. This includes storage systems being able to communicate and publish to VMware its capabilities for storage space capacity, availability, performance and configuration among other things.
With VVOL's, VASA gets leveraged for bidirectional (e.g. two-way) communication, where VMware hypervisors and management tools can tell the storage system about configuration, activities to do and other things. Hence why VASA is important to have in your VMware CASA.
What’s this object storage stuff?
VVOL's are a form of object storage access in that they differ from traditional block (LUN's) and file (NAS volumes/mount points) access. However, keep in mind that not all object storage is the same, as there are different object storage access methods and architectures.
Object Storage basics, generalities and block file relationships
Avoid making the mistake of assuming that when you hear object storage it means ANSI T10 (the folks that manage the SCSI command specifications) Object Storage Device (OSD) or something else. There are many different types of underlying object storage architectures, some with block and file as well as object access front ends. Likewise there are many different types of object access that sit on top of object architectures as well as traditional storage systems.
An example of how some object storage gets accessed (not VMware specific)
Also keep in mind that there are many different types of object access mechanisms, including HTTP REST-based, S3 (e.g. a common industry de facto standard based on the Amazon Simple Storage Service), SNIA CDMI, SOAP, Torrent, XAM, JSON, XML, DICOM and HL7, just to name a few, not to mention various programmatic bindings or application-specific implementations and APIs. Read more about object storage architectures, access and related topics, themes and trends at www.objectstoragecenter.com
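As a point of contrast with the VVOL access model discussed in this series, here is a minimal sketch of S3-style object access using Python and boto3; the bucket and key names are hypothetical, and credentials are assumed to be configured in the environment.

```python
# Minimal sketch of object access via the S3 API using boto3 (one of the
# access methods listed above). Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

# PUT an object: the unit of access is the whole object plus metadata,
# not a block address (LBA) or a file byte offset.
s3.put_object(Bucket="example-bucket", Key="demo/hello.txt", Body=b"hello")

# GET it back; reads return the object (or a byte range), not sectors.
obj = s3.get_object(Bucket="example-bucket", Key="demo/hello.txt")
print(obj["Body"].read())  # b'hello'
```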
Let's take a break here, and when you are ready, click here to read the third piece in this series, VMware VVOL's and storage I/O fundamentals Part 2.
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Are VMware VVOLs in your virtual server and storage I/O future?
Are VMware VVOL’s in your virtual server and storage I/O future?
Note that this is a three part series with the first piece here (e.g. Are VMware VVOL’s in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 2).
With VMworld 2014 just around the corner, for some of you the question is not if Virtual Volumes (VVOL’s) are in your future, rather when, where, how and with what.
What this means is that for some, hands-on beta testing is already occurring or will be soon, while for others that might be around the corner or down the road.
Some of you may already be participating in the VMware beta of VVOL involving one of the first storage vendors also in the beta program.
On the other hand, some of you may not be in VMware centric environments and thus VVOL’s may not yet be in your vocabulary.
How do you know if VVOL's are in your future if you don't know what they are?
First, to be clear: as of the time this was written, VMware VVOL's are not released, are only in beta, and have been covered at earlier VMworlds. Consequently, what you are going to read here is based on VVOL material that has already been made public in various venues, including earlier VMworlds and VMware blogs among other places.
The quick synopsis of VMware VVOL’s overview:
VVOL considerations and your future
As mentioned, as of this writing, VVOL’s are still a future item granted they exist in beta.
For those of you in VMware environments, now is the time to add VVOL to your vocabulary which might mean simply taking the time to read a piece like this, or digging deeper into the theories of operations, configuration, usage, hints and tips, tutorials along with vendor specific implementations.
Explore your options and ask yourself: do you want VVOL's, or do you need them?
What support does your current vendor(s) have for VVOL's, or what is their statement of direction (SOD), which you might have to get from them under NDA?
This means that there will be some first vendors with some of their products supporting VVOL’s with more vendors and products following (hence watch for many statements of direction announcements).
Speaking of vendors, watch for a growing list of vendors to announce their current or plans for supporting VVOL’s, not to mention watch some of them jump up and down like Donkey in Shrek saying "oh oh pick me pick me".
When you ask a vendor if they support VVOL's, move beyond the simple yes or no; ask which of their specific products, whether it is block (e.g. iSCSI) or NAS file (e.g. NFS) based, and about other caveats or configuration options.
Watch for more information about VVOL’s in the weeks and months to come both from VMware along with from their storage provider partners.
How will VVOL impact your organization's best practices, policies and workflows, including who does what along with associated responsibilities?
Where to learn more
Check out the companion piece to this that takes a closer look at storage I/O and VMware VVOL fundamentals here and here.
Also check out this good VMware blog via Cormac Hogan (@CormacJHogan) that includes a video demo; granted it's from 2012, however some of this stuff actually does take time, and thus it is still very timely. Speaking of VMware, Duncan Epping (aka @DuncanYB) at his Yellow-Bricks site has some good posts to check out as well, with links to others including this here. Also check out the various VVOL-related sessions at VMworld, as well as the many existing, and soon to be many more, blogs, articles and videos you can find via Google. And if you need a refresher: Why VASA is important to have in your VMware CASA.
Of course keep an eye here, or whichever venue you happen to read this in, for future follow-up and companion posts. And if you have not done so, sign up for the beta here, as there is lots of good material including SDKs, configuration guides and more.
VVOL Poll
What are your VVOL plans? View results and cast your vote here.
Wrap up (for now)
Hope you found this quick overview of VVOL's of use. Since VVOL's at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta, or poke around the web (for now).
Keep an eye on and learn more about VVOL’s at VMworld 2014 as well as in various other venues.
IMHO VVOL's are or will be in your future; however, the question will be: is there going to be a back to the future moment for some of you with VVOL's?
Also what VVOL questions, comments and concerns are in your future and on your mind?
And remember to check out the second part to this series here.
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Despite being declared dead, Fibre Channel continues to evolve with FC-BB-6
Despite being declared dead, Fibre Channel continues to evolve with FC-BB-6
Like many technologies that have been around for more than a decade or two, Fibre Channel (FC) for networking your servers and storage often gets declared dead when something new appears. It seems like just yesterday, when iSCSI was appearing on the storage networking scene in the early 2000s, that FC was declared dead; yet it remains and continues to evolve, including moving over Ethernet with FC over Ethernet (FCoE).
Recently the Fibre Channel Industry Association (FCIA) made an announcement on continued development and enhancements, including FC-BB-6, which applies to both "classic" or "legacy" FC as well as the newer and emerging FCoE implementations. FCIA is not alone in this activity; as the name implies, it is the industry consortium that works with the T11 standards folks. T11 is a Technical Committee of the International Committee on Information Technology Standards (INCITS, pronounced "insights").
Keep in mind that there are a couple of pieces to Fibre Channel: the upper levels and the lower-level transports.
With FCoE, the upper-level portions get mapped natively onto Ethernet without having to map on top of IP, as happens with distance extension using FCIP.
Likewise FCoE is more than simply mapping one of the FC upper level protocols (ULPs) such as the SCSI command set (aka SCSI_FCP) onto IP (e.g. iSCSI). Think of ULPs almost as a guest that gets transported or carried across the network; however, let's also be careful not to play the software defined network (SDN), virtual network, network virtualization or I/O virtualization (IOV) card, at least not yet; we will leave that up to some creative marketers ;).
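As a conceptual illustration of that layering (a sketch only, not wire-accurate frame formats), the snippet below shows a SCSI command (the ULP payload) carried in an FC frame, which with FCoE rides directly inside an Ethernet frame with no IP layer in between; 0x8906 is the EtherType registered for FCoE.

```python
# Conceptual layering sketch -- not wire-accurate frame formats.
from dataclasses import dataclass

@dataclass
class ScsiCommand:          # upper level protocol (ULP) payload, e.g. SCSI_FCP
    opcode: str
    lba: int
    length: int

@dataclass
class FcFrame:              # Fibre Channel frame carrying the ULP
    source_id: str
    dest_id: str
    payload: ScsiCommand

@dataclass
class EthernetFrame:        # FCoE: the FC frame mapped natively onto Ethernet
    src_mac: str
    dst_mac: str
    ethertype: int          # FCoE uses EtherType 0x8906
    payload: FcFrame

read_cmd = ScsiCommand("READ_16", lba=2048, length=8)
fc = FcFrame("0x010203", "0x040506", read_cmd)
fcoe = EthernetFrame("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 0x8906, fc)
```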
At the heart of Fibre Channel, beyond the cable and encoding scheme, are a set of protocols and command sets; one in particular is FC Backbone, now in its 6th version (read more here at the T11 site, or here at the SNIA site).
Some of the highlights of the FCIA announcement include:
VN2VN connectivity support, enabling direct point-to-point virtual links (not to be confused with point-to-point physical cabling) between nodes in an FCoE network, simplifying configurations for smaller SAN networks where zoning might not be needed (e.g. removing complexity and cost).
Support for Domain ID scalability, including more efficient use by FCoE fabrics, enabling large scalability of converged SANs. Also keep an eye on the emerging T11 FC-SW-6 distributed switch architecture for implementation over Ethernet, now in the final stages of development.
Here are my perspectives on this announcement by the FCIA:
"Fibre Channel is a proven protocol for networked data center storage that just got better," said Greg Schulz, founder StorageIO. "The FC-BB-6 standard helps to unlock the full potential of the Fibre Channel protocol that can be implemented on traditional Fibre Channel as well as via Ethernet based networks. This means FC-BB-6 enabled Fibre Channel protocol based networks give flexibility, scalability and secure high-performance resilient storage networks to be implemented." |
Both "classic" or "legacy" Fibre Channel based cabling and networking are still alive with a road map that you can view here.
However FCoE also continues to mature and evolve, and in some ways FC-BB-6 and its associated technologies and capabilities can be seen as the bridge between the past and the future. Thus while the role of both FC and FCoE, along with other ways of networking with your servers and storage, continues to evolve, so too does the technology. Also keep in mind that not everything is the same in the data center or information factory, which is why we have different types of server, storage and I/O networks to address different needs, requirements and preferences.
Additional reading and viewing on FC, FCoE and storage networking:
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
July 2014 Server and StorageIO Update newsletter
Server and StorageIO Update newsletter – July 2014
June 2014 Server and StorageIO Update newsletter
Server and StorageIO Update newsletter – June 2014
April and May 2014 Server and StorageIO Update newsletter
Server and StorageIO Update newsletter – April and May 2014
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.
Instead the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way.
Introducing Solid State Hybrid Drives (SSHD)
Solid State Hybrid Drives (SSHD) are the successors to the previous generation of Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here and here).
While it would be nice to simply have SSD for everything, there are also economic budget realities to be dealt with. Keep in mind that a bit of NAND flash SSD cache in the right location for a given purpose can go a long way, which is the case with SSHDs. This is also why in many environments today there is a mix of SSD and HDDs of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).
However, if you have the need for speed and can afford or benefit from the increased productivity, by all means go SSD!
Otoh, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference with today's SSHDs, which are available for enterprise class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors that I have used for several years now, which had basic read acceleration however no write optimization.
Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs
As their name implies, they are a hybrid between a NAND flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best of both worlds situation. This means that the SSHD is based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), NAND flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software defined storage performance optimization algorithms.
Btw, if you were paying attention to that last sentence, you would have picked up on something about nonvolatile memory being used for persistent write cache, which should prompt the question: would that help with NAND flash write endurance? Yup.
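To illustrate the general idea (and only the idea; this is a toy sketch, not vendor firmware logic), here is a Python sketch of an SSHD-style hybrid: a small LRU read cache standing in for the NAND, a small persistent write buffer that acknowledges writes quickly and destages lazily, and a dict standing in for the magnetic media.

```python
# Illustrative sketch only (not vendor firmware): the general idea behind
# an SSHD -- a small NAND read cache for "hot" blocks plus a small
# persistent write buffer in front of the slower magnetic media.
from collections import OrderedDict

class HybridDriveSketch:
    def __init__(self, read_cache_blocks=1024, write_buffer_blocks=128):
        self.read_cache = OrderedDict()   # LBA -> data, kept in LRU order
        self.read_cache_size = read_cache_blocks
        self.write_buffer = {}            # persistent (e.g. NV memory) staging
        self.write_buffer_size = write_buffer_blocks
        self.magnetic = {}                # stand-in for the HDD platters

    def read(self, lba):
        if lba in self.read_cache:        # cache hit: served at flash speed
            self.read_cache.move_to_end(lba)
            return self.read_cache[lba]
        data = self.write_buffer.get(lba) or self.magnetic.get(lba)
        self._cache(lba, data)            # promote the hot block into NAND
        return data

    def write(self, lba, data):
        self.write_buffer[lba] = data     # ack fast from the persistent buffer
        if len(self.write_buffer) >= self.write_buffer_size:
            self._destage()

    def _destage(self):                   # lazily flush buffered writes to disk
        self.magnetic.update(self.write_buffer)
        self.write_buffer.clear()

    def _cache(self, lba, data):
        self.read_cache[lba] = data
        if len(self.read_cache) > self.read_cache_size:
            self.read_cache.popitem(last=False)   # evict least recently used
```

Note how buffering writes in nonvolatile memory also coalesces them before they reach any flash, which is the write-endurance benefit hinted at above.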
Where and when to use SSHD?
In the StorageIO Industry Trends Perspective thought leadership white paper I recently released, compliments of Seagate Enterprise Turbo SSHD (that's a disclosure btw ;), enterprise class Solid State Hybrid Drives (SSHD) were looked at and test driven in the StorageIO Labs with various application workloads. These activities included running, in a virtual environment, common applications including database and email messaging using industry standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).
Conventional storage system focused workloads using Iometer, iorate and vdbench were also run in the StorageIO Labs to set up baselines for reads, writes, random, sequential, small and large I/O sizes, with IOPS, bandwidth and response time latency results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?) with other ongoing workloads continuing in different configurations. The various test drive proof points were done comparing SSHD, SSD and different HDDs.
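To give a feel for what such baseline profiling measures, here is a rough, illustrative Python sketch of an Iometer-style random-read test (not a replacement for Iometer, iorate or vdbench); the file path is hypothetical, and OS page caching will flatter the numbers unless the target is very large or direct I/O is used.

```python
# Rough sketch of a random-read micro-benchmark: time 4KiB reads at random
# offsets in a pre-created file and report IOPS plus average latency.
import os, random, time

PATH = "/tmp/testfile.bin"   # hypothetical target; pre-create with real data
IO_SIZE = 4096
COUNT = 10000

size = os.path.getsize(PATH)
latencies = []
with open(PATH, "rb") as f:
    for _ in range(COUNT):
        offset = random.randrange(0, size - IO_SIZE)
        start = time.perf_counter()
        f.seek(offset)
        f.read(IO_SIZE)
        latencies.append(time.perf_counter() - start)

elapsed = sum(latencies)
print(f"IOPS: {COUNT / elapsed:,.0f}")
print(f"Avg latency: {elapsed / COUNT * 1000:.3f} ms")
```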
Data Protection (Archiving, Backup, BC, DR) | Staging cache buffer area for snapshots, replication or current copies before streaming to other storage tier using fast read/write capabilities. Meta data, index and catalogs benefit from fast reads and writes for faster protection. |
Big Data DSS | Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHD vs. slower similar capacity size HDDs for Data warehouse, DSS and other analytic environments. |
Email, Text and Voice Messaging | Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os with more space capacity. |
OLTP, Database | Eliminate the need to short stroke HDDs to gain performance, offer more space capacity and IOP performance per device for tables, logs, journals, import/export and scratch, temporary ephemeral storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-thru caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices. |
Server Virtualization | Fast disk storage for data stores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others. Holding virtual machines such as VMware VMDKs, along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache with writes going to fast SSHD. For example, VMware vSphere 5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data while boosting performance without breaking the budget or adding complexity. Speaking of virtual, as mentioned, the various proof points were run using Windows systems that were VMware guests with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached; read how to do that here. Hint: If you know about the VMware trick for making a HDD look like a SSD to vSphere/ESXi (refer to here and here), think outside the virtual box for a moment on some things you could do with SSHD in a VSAN environment among other things; for now, just sayin ;). |
Virtual Desktop Infrastructure (VDI) | SSHD can be used as high performance magnetic disk for storing linked clone images, applications and data. Leverage fast reads to support read-ahead or pre-fetch to complement SSD based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-thru cache solutions to be more effective. Reduce the impact of boot, shutdown, virus scan or maintenance storms while providing more space capacity. |
Table 1 Example application and workload scenarios benefiting from SSHDs
Test drive application proof points
Various workloads were run using the Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real-world-like application workload scenarios. These include general storage I/O performance characteristics profiling (e.g. reads, writes, random, sequential and various I/O sizes) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Council (TPC) workloads, along with VMware server virtualization among other use case scenarios.
Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These were also conducted using 6Gbps SAS with PCIe Gen 2 based servers, and ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.
Copy (read and write) 80GB and 220GB file copies (time to copy entire file)
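As a sketch of how a file-copy proof point like this can be timed, the following Python snippet copies a large file and reports elapsed time and throughput; the paths are hypothetical.

```python
# Simple sketch: time a large file copy and report throughput.
import os, shutil, time

SRC, DST = "/data/source/bigfile.bin", "/data/target/bigfile.bin"  # placeholders

start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

size_gb = os.path.getsize(SRC) / (1024 ** 3)
print(f"Copied {size_gb:.1f} GB in {elapsed:.1f} s "
      f"({size_gb * 1024 / elapsed:.1f} MB/s)")
```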
SQL Server TPC-B batch database updates
Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 500GB 3.5” 7.2K RPM HDD 3 Gbps SATA, 1TB 3.5” 7.2K RPM HDD 3 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, dual CPU (Intel x3490 2.93 GHz), with LSI 9211 6Gbps SAS adapters and TPC-B (www.tpc.org) workloads. The VM resided on a separate data store from the devices being tested. All devices being tested with the SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.
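For those unfamiliar with the workload shape, below is a heavily simplified, illustrative Python/sqlite3 sketch of a TPC-B style batch-update transaction (random account updates plus a history append); the actual proof points above used Microsoft SQL Server and the official TPC-B workload from www.tpc.org.

```python
# Simplified, illustrative sketch of the TPC-B style transaction shape:
# small random writes plus a sequential log-style append per transaction.
import random, sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE history  (account INTEGER, delta INTEGER);
""")
db.executemany("INSERT INTO accounts VALUES (?, 0)",
               [(i,) for i in range(10000)])

for _ in range(1000):
    acct = random.randrange(10000)
    delta = random.randint(-5000, 5000)
    with db:  # one transaction per batch update
        db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                   (delta, acct))
        db.execute("INSERT INTO history VALUES (?, ?)", (acct, delta))
```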
SQL Server TPC-E transactional workload
Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 300GB 2.5” Savvio 10K RPM HDD 6 Gbps SAS, 1TB 3.5” 7.2K RPM HDD 6 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, dual CPU (E8400 2.99GHz), with LSI 9211 6Gbps SAS adapters and TPC-E (www.tpc.org) workloads. The VM resided on a separate SSD based data store from the devices being tested (e.g., where the MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.
Microsoft Exchange workload
Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021) SSD 6 Gbps SATA, 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 2.5” Savvio 146GB HDD 6 Gbps SAS, 3.5” Barracuda 500GB 7.2K RPM HDD 3 Gbps SATA. Email server hosted as a guest on VMware vSphere/ESXi V5.5, Microsoft Small Business Server (SBS) 2011 Service Pack 1 64 bit, 8GB DRAM, one CPU (Intel X3490 2.93 GHz), LSI 9211 6 Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where the EDB resided. The VM was on a separate SSD based data store from the devices being tested. Log file IOPS were handled via a separate SSD device.
Read more about the above proof points, along with viewing data points and configuration information, in the associated white paper found here (no registration required).
What this all means
Similar to flash-based SSD technologies, the question is not if, rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environment have the need for storage I/O speed without loss of space capacity, and without breaking your budget, SSD enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise class SSHDs such as those from Seagate by visiting this link here.
Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.
Ok, nuff said.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Data Storage Innovation Chat with SNIA Wayne Adams and David
Data Storage Innovation Chat with SNIA Wayne Adams and David
In this episode, SNIA Chairman Emeritus Wayne Adams and current Chairman David Dale join me in a conversation from the Data Storage Innovation Conference (DSICON) 2014 conference event.
SNIA Chairman David Dale opening remarks SNIA DSICON 2014
SNIA DSI Conference (DSICON), CDMI Conformance Testing and other updates
DSICON is a new event produced by SNIA targeted at IT professionals involved with data storage related topics, themes, technologies and tools spanning hardware, software, cloud, virtual and physical. In this conversation, we talk about the new DSI event, the diversity of new attendees who are attending their first SNIA event, along with other updates. Some of these updates include what is new with the SNIA Cloud Data Management Interface (CDMI), Non-Volatile Memory (think flash and SSD), SMI-S, education and more. In addition to the DSICON event, SNIA also announced that the CDMI Cloud Interoperability Conformance Test Program is now available for cloud solution vendors and providers.
DSI, Santa Clara, CA (April 22, 2014) — The Storage Networking Industry Association (SNIA) today announced the launch of a Cloud Data Management Interface (CDMI) Conformance Test Program (CTP) that validates cloud products' conformance to the ISO/IEC CDMI standard for cloud data interoperability (ISO catalog number ISO/IEC 17826:2012). Cloud solutions that pass the CDMI CTP offer cloud consumers assurance that the CDMI standard has been properly implemented and that data stored in any conformant implementation will be transportable to any other conformant implementation.
Here is a perspective commentary quote that I issued which was included in the SNIA Press Release.
“Today, the cloud market is crowded with a slew of vendors offering different solutions for migration, data management and security, often leaving IT customers confused about the right solution for their requirements,” said Greg Schulz, founder of StorageIO, a storage technology advisory and consulting firm. “SNIA’s CDMI Conformance Test Program is a great step forward helping IT customers, VARs or others in the industry navigate their way through the fog of cloud interoperability requirements in a streamlined fashion, not to mention laying standard routes vendors will want to adopt going forward.”
Check out the full SNIA CDMI press release announcement for the conformance testing here, as well as learn more about CDMI here.
Listen in to our podcast conversation here as we cover cloud, convergence, software defined and more about data storage.
Topics and themes discussed:
Check out SNIA and DSICON listen in to the conversation with David Dale and Wayne Adams here.
Ok, nuff said.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Chat with Cash Coleman talking ClearDB, cloud database and Johnny Cash
Podcast with Cash Coleman talking ClearDB, cloud database and Johnny Cash
In this episode from the SNIA DSI 2014 event I am joined by Cashton Coleman (@Cash_Coleman).
Introducing Cashton (Cash) Coleman and ClearDB
Cashton (Cash) is a software architect, product mason, family bonder, life builder and idea founder, along with being Founder & CEO of SuccessBricks, Inc., makers of ClearDB. ClearDB is a provider of MySQL database software tools for cloud and physical environments. In our conversation we talk about ClearDB, what they do and whom they do it with, including deployments in clouds as well as onsite. For example, if you are using some of the Microsoft Azure cloud services with MySQL, you may already be using this technology. However, there is more to the story and discussion, including how Cash got his name, and how to speed up databases for little and big data among other topics.
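For readers who have not used a hosted MySQL service, here is a hypothetical Python sketch of pointing an application at such a database using the mysql-connector-python package; the hostname, credentials and database name below are placeholders, not real ClearDB endpoints.

```python
# Hypothetical sketch: connecting an application to a hosted MySQL
# database (placeholder endpoint and credentials, not real values).
import mysql.connector

conn = mysql.connector.connect(
    host="us-example.cleardb.example.com",  # placeholder hostname
    user="appuser",
    password="secret",
    database="appdb",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()
```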
If you are a database person, you will want to listen to what Cash has to say about boosting performance and getting more value out of your physical hardware or cloud services. On the other hand, if you are a storage person, listen in to get some insight and ideas on how to address database performance and resiliency. For others who just like to listen to new trends, technology talk, or hear about emerging companies to keep an eye on, you won't want to miss the podcast conversation.
Topics and themes discussed:
Check out ClearDB and listen in to the conversation with Cash podcast here.
Ok, nuff said.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Lenovo TS140 Server and Storage I/O Review
Lenovo TS140 Server and Storage I/O Review
This review looks at my recent hands-on experiences using a Lenovo TS140 (Model MT-M 70A4-001RUS) pedestal (aka tower) server that the Lenovo folks sent to me to use for a month or so. The TS140 is one of the servers Lenovo had prior to its acquisition of the IBM x86 server business, which you can read about here.
The Lenovo TS140 Experience
Let's start with the overall experience, which was very easy and good. This included initially answering some questions to get the process moving, and agreeing to keep the equipment safe, secure and insured, as well as not damaging anything (this was not a tear-it-down, rip-it-apart-into-pieces trial).
Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived. Turns out it was a box (server hardware) inside of a Lenovo box, which was inside a slightly larger unmarked shipping box (see larger box in the background).
TS140 shipment undergoing initial security screen scan and sniff (all was ok)
TS140 with Keyboard and Mouse (Monitor not included)
One of the reasons I have a photo of the TS140 on a desk is that I initially put it in an office environment as Lenovo claimed it would be quiet enough to do so. I was not surprised and indeed the TS140 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet the TS140 is a good fit for environments that need a small or starter server that has to go into an office environment as opposed to a server or networking room. For those who are into mounting servers, there is the option for placing the TS140 on its side into a cabinet or rack.
TS140 with Windows Server 2012 Essentials
TS140 as tested
TS140 "Selfie" with 4 x 4GB DDR3 DIMM (16GB) and PCIe slots (empty)
16GB RAM (4 x 4GB DDR3 UDIMM, larger DIMMs are supported)
Windows Server 2012 Essentials
Intel Xeon E3-1225 v3 @3.2 Ghz quad (C226 chipset and TPM 1.2) vPRO/VT/EP capable
Intel GbE 1217-LM Network connection
280 watt power supply
Keyboard and mouse (no monitor)
Two 7.2K SATA HDDs (WD) configured as RAID 1 (100GB LUN)
Slot 1 PCIe G3 x16
Slot 2 PCIe G2 x1
Slot 3 PCIe G2 x16 (x4 electrical signal)
Slot 4 PCI (legacy)
Onboard 6Gbps SATA RAID 0/1/10/5
Onboard SATA 3.0 (6Gbps) connectors (0-4), USB 3.0 and USB 2.0
Read more about what I did with the Lenovo TS140 in part II of my review along with what I liked, did not like and general comments here.
Ok, nuff said (for now)
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Part II: What I did with Lenovo TS140 in my Server and Storage I/O Review
Part II: Lenovo TS140 Server and Storage I/O Review
This is the second of a two-part post series on my recent experiences with a Lenovo TS140 Server, you can read part I here.
What Did I do with the TS140
After initial check out in an office type environment, I moved the TS140 into the lab area where it joined other servers to be used for various things.
Some of those activities included using Windows Server 2012 Essentials along with associated admin activities. I also installed VMware ESXi 5.5 and ran into a few surprises. One of those was that I needed to apply an update to the VMware drivers to support the onboard Intel NIC, as well as enable the VT and EP virtualization assist modes via the BIOS. The biggest surprise was discovering that I could not install VMware onto an internal drive attached via one of the internal SATA ports, which turns out to be a BIOS firmware issue.
Lenovo confirmed this when I brought it to their attention, and the workaround is to install VMware onto a USB flash thumb drive or other USB attached drive, or to use external storage via an adapter. As of this time Lenovo is aware of the VMware issue; however, no date for a new BIOS or firmware is available. Speaking of BIOS, I did notice that there was newer BIOS and firmware available (FBKT70AUS, December 2013) than what was installed (FB48A, August 2013). So I went ahead and did the upgrade, which was a smooth, quick and easy process. The process included going to the Lenovo site (see resource links below), selecting the applicable download, and then installing it following the directions.
Since I was going to install various PCIe SAS adapters into the TS140 attached to external SAS and SATA storage, this was not a big issue, more of an inconvenience. Likewise, for using storage mounted internally, the workaround is to use a SAS or SATA adapter with internal ports (or a cable). Speaking of USB workarounds, if you have an HDD, HHDD, SSHD or SSD that is a SATA device and need to attach it to USB, then get one of these cables. Note that there are USB 3.0 and USB 2.0 cables (see below) available, so choose wisely.
USB to SATA adapter cable
In addition to running various VMware-based workloads with different guest VMs, I also ran Futuremark PCMark (btw, if you do not have this in your server storage I/O toolbox, it should be there) to gauge the system's performance. As mentioned, the TS140 is quiet. However, it also has good performance depending on which processor you select. Note that while the TS140 has a list price, as of the time of this post, under $400 USD, that will change depending on which processor, amount of memory, software and other options you choose.
PCMark test | Result
Composite score | 2274
Compute | 11530
System Storage | 2429
Secondary Storage | 2428
Productivity | 1682
Lightweight | 2137
PCMark results are shown above for the Windows Server 2012 system (non-virtualized), configured as shipped and received from Lenovo.
What I liked
Unbelievably quiet, which may not seem like a big deal; however, if you are looking to deploy a server into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice-to-have consideration ;).
Something else that I liked is that the TS140 with the E3-1220 v3 family of processors supports PCIe Gen 3 adapters, which is useful if you are going to be using 10GbE cards or 12Gbps SAS and faster cards to move lots of data, support more IOPS or reduce response time latency.
In addition, while only 4 DIMM slots is not very much, it is more than what some other similarly focused systems have; plus, with large capacity DIMMs, you can still get a nice system, or two, or three or four for a cluster at a good price or value (Hmm, VSAN anybody?). Also, while not a big item, the TS140 did not require ordering an HDD or SSD: if you are not also ordering software, you can get the system diskless and use your own.
Speaking of I/O slots, naturally I'm interested in server storage I/O, so having multiple slots is a must-have, along with a processor that is quad core (pretty much standard these days) with VT and EP for supporting VMware (these were disabled in the BIOS; however, that was an easy fix).
Then there is the price, as of this posting starting at $379 USD, which is a bare bones system (e.g. minimal memory, basic processor, no software) whose price increases as you add more items. What I like about this price is that it has the PCIe Gen 3 slot as well as other PCIe Gen 2 slots for expansion, meaning I can install 12Gbps (or 6Gbps) SAS storage I/O adapters, or other PCIe cards including SSD, RAID, 10GbE CNA or other cards to meet various needs including software defined storage.
What I did not like
I would like to have had at least six vs. four DIMM slots; however, keeping in mind the price point where this system is positioned, not to mention what you could do with it thinking outside of the box, I'm fine with only 4 x DIMM. Space for more internal storage would be nice; however, if that is what you need, then there are larger Lenovo models to look at. By the way, thinking outside of the box, could you do something like a Hadoop, OpenStack, Object Storage, VMware VSAN or other cluster with these, in addition to using one as a Windows Server?
Yup.
Granted you won’t have as much internal storage, as the TS140 only has two fixed drive slots (for more storage there is the model TD340 among others).
However, it is not that difficult to add more (not Lenovo endorsed) by adding a StarTech enclosure like I did with my other systems (see here). Oh, and those extra PCIe slots: that's where a 12Gbps (or 6Gbps) adapter comes into play while leaving room for GbE cards and PCIe SSD cards. Btw, not sure what to do with that PCIe x1 slot? That's a good place for a dual GbE NIC to add more networking ports, or a SATA adapter for attaching larger capacity, slower drives.
StarTech 2.5″ SAS SATA drive enclosure via Amazon.com
If VMware is not a requirement and you need a good entry-level server for a large SOHO or small SMB environment, or if you are looking to add a flexible server to a lab or for other things, the TS140 is good (see disclosure below) and quiet.
Otoh, as mentioned, there is a current issue with the BIOS/firmware of the TS140 involving VMware (I tried ESXi 5.0 and 5.5).
However, I did find a workaround: the current TS140 BIOS/firmware does work with VMware if you install onto a USB drive and then use external SAS, SATA or other accessible storage, which is how I ended up using it.
Lenovo TS140 resources include
Summary
Disclosure: Lenovo loaned the TS140 to me for just under two months, including covering shipping costs, at no charge (to them or to me); hence this is not a sponsored post or review. On the other hand, I have since placed an order for a new TS140, similar to the one tested, that I bought on-line from Lenovo.
This new TS140 server that I bought joins the Dell Inspiron I added late last year (read more about that here) as well as other HP and Dell systems.
Overall I give the Lenovo TS140 a provisional "A", which would be a solid "A" once the BIOS/firmware issue mentioned above is resolved for VMware. Otoh, if you are not concerned about using the TS140 for VMware (or can use the workaround), then consider it an "A" now.
As mentioned above, I liked it so much I actually bought one to add to my collection.
Ok, nuff said (for now)
Cheers
Gs
Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved
Nand flash SSD NVM SCM server storage I/O memory conversations
Updated 8/31/19
The SSD Place NVM, SCM, PMEM, Flash, Optane, 3D XPoint, MRAM, NVMe Server, Storage, I/O Topics
Now and then somebody asks me if I’m familiar with flash or nand flash Solid State Devices (SSD) along with other non-volatile memory (NVM) technologies and trends including NVM Express (NVMe).
Having been involved with various types of SSD technology, products and solutions since the late 80s, initially as a customer in IT (including as a launch customer for DEC’s ESE20 SSDs), then later as a vendor selling SSD solutions, as well as an analyst and advisory consultant covering the technologies, I tell the person asking, well, yes, of course.
That gave me the idea to collect some of that content here in this post, both to help me keep track of it and to make it easy to find (it will be updated now and then).
Thus this is a collection of articles, tips, posts, presentations, blog posts and other content on SSD including nand flash drives, PCIe cards, DIMMs, NVM Express (NVMe), hybrid and other storage solutions along with related themes.
Also if you can’t find it here, you can always do a Google search like this or this to find some more material (some of which is on this page).
Flash SSD Articles, posts and presentations
The following are some of my tips, articles, blog posts, presentations and other content on SSD. Keep in mind that the question should not be if SSD are in your future, rather when, where, with what, from whom and how much. Also keep in mind that a bit of SSD as storage or cache in the right place can go a long way, while a lot of SSD will give you a benefit however will also cost a lot of cash (see the quick cost sketch after this list).
- NVMe overview and primer – Part I
- Part II – NVMe overview and primer (Different Configurations)
- Part III – NVMe overview and primer (Need for Performance Speed)
- Part IV – NVMe overview and primer (Where and How to use NVMe)
- Part V – NVMe overview and primer (Where to learn more, what this all means)
- PCIe Server I/O Fundamentals
- If NVMe is the answer, what are the questions?
- NVMe Won’t Replace Flash By Itself
- Via Computerweekly – NVMe discussion: PCIe card vs U.2 and M.2
- Server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
- Via GizModo: Comments on Intel Optane 800P NVMe M.2 SSD
- Via InfoStor: 8 Big Enterprise SSD Trends to Expect in 2017
- Why NVMe Should Be in Your Data Center – Preparing for Tomorrow’s Data Center Today (StorageIO Guest Post Via Micron.com)
- Via SearchStorage: Comments on Top 10 Tips on Solid State Storage Adoption Strategy
- Via J Metz’s Blog – Vendor neutral bibliography of material by subject matter for NVMe
- Via InfoStor – SSD Trends, Tips and Topics
- StorageIOblog: Get in the NVMe SSD game (if you are not already)
- Via J Metz’s Blog – Vendor neutral learning NVMe A Program of Study
- Via StorageIOblog: VMware vSAN v6.6
- Via StorageIOblog: Cisco announces 32Gb FC and NVMe fabrics
- Via Pure Storage: Announces new NVMe storage
- Via Micron Blog (Guest Post by Greg Schulz): What’s next for NVMe and your Data Center – Preparing for Tomorrow Today
- Via ChannelProSMB: Comments on NVMe (and SSD) and server storage I/O
- EnterpriseStorageForum: Comments Top 10 Enterprise SSD Market Trends
- SearchSolidStateStorage: Comments on How to add solid-state storage to your enterprise data storage systems
- Microsoft TechNet: Understand the cache in Storage Spaces Direct
- Microsoft Technet: Don’t do it: consumer-grade solid-state drives (SSD) in Storage Spaces Direct
- Why Micron NVMe SSDs (Via Micron.com)
- New Path to Storage I/O Performance and Resiliency With NVMe (Via Micron.com)
- How NVMe Will Revolutionize Server and Storage I/O(Via Micron.com)
- How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
- Why NVMe Should Be in Your Data Center (Via Micron.com)
- NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
- Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
- EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
- Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
- NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
- Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
- Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
- NVM Express solutions (Via SuperMicro)
- Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
- PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
- RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
- NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
- What should I consider when using SSD cloud? (Via SearchCloudStorage)
- MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
- Selecting Storage: Start With Requirements (Via NetworkComputing)
- PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
- Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
- Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
- How many IOPS can a HDD, HHDD or SSD do (Part I)?
- How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
- I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
- Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
- Via EnterpriseStorageForum: 10-Year Review of Data Storage
- Via EnterpriseStorageForum: Where All-Flash Storage Makes No Sense
- Via EnterpriseStorageForum: Top Tips for Enterprise SSD Form Factor Selection and Deployment
- Who Will Be Top Of Storage World?
- Intel announces new processors
- Server Storage I/O CI, HCI overview
- Data Infrastructure Tradecraft Overview
- SSD, flash and NVM Trends
- StorageIO Industry Trends Perspective White Paper: Solid State Hybrid Drives (SSHD) aka Turbo Drives
- Via EnterpriseStorageForum: Software-Defined Storage Tips
- Via EnterpriseStorageForum: What SSD form factor is best
- Via StorageIOlab (ITCentralStation): Intel NVMe 750 SSD review
- Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
- Webinar and Blog: 12Gb SAS SSD Enabling Server Storage I/O Performance and Effectiveness Webinar
- AFA over NVMe Fabric (Via Zstor)
- Via CustomPCreview: Samsung SM961 PCIe NVMe SSD Shows Up for Pre-Order
- StorageIO Industry Trends Perspective White Paper: Seagate 1200 Enterprise SSD (12Gbps SAS) with proof points (e.g. Lab test results)
- Companion: Seagate 1200 12Gbps Enterprise SAS SSD StorageIO lab review (blog post Part I and Part II)
- NewEggBusiness: Seagate 1200 12Gbps Enterprise SAS SSD StorageIO lab review
- Are NVMe m.2 drives ready for the limelight?
- Google (Research White Paper): Disks for Data Centers (vs. just SSD)
- CMU (PDF White Paper): A Large-Scale Study of Flash Memory Failures in the Field
- Via ZDnet: Google doubles Cloud Compute local SSD capacity: Now it’s 3TB per VM
- Here’s why Western Digital is buying SanDisk (Via ComputerWorld)
- HP, SanDisk partner to bring storage-class memory to market (Via ComputerWorld)
- Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
- New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
- Wow, Samsung’s New 16 Terabyte SSD Is the World’s Largest Hard Drive (Via Gizmodo)
- Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)
- New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
- Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
- SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
- Everspin & Aupera reveal all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
- Intel, Micron Launch “Bulk-Switching” ReRAM (Via EEtimes)
- Spot The Newest & Best Server Trends (Via Processor)
- Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
- 2015 Tech Preview: SSD and SMBs (Via ChannelProNetworks )
- How to test your HDD, SSD or all flash array (AFA) storage fundamentals (Via StorageIOBlog)
- Processor: Comments on What Abandoned Data Is Costing Your Company
- Processor: Comments on Match Application Needs & Infrastructure Capabilities
- Processor: Comments on Explore The Argument For Flash-Based Storage
- Processor: Comments on Understand The True Cost Of Acquiring More Storage
- Processor: Comments on What Resilient & Highly Available Mean
- StorageSearch.com: (not to be confused with TechTarget, good site with lots of SSD related content)
- StorageSearch.com: What kind of SSD world… 2015?
- StorageSearch.com: Various links about SSD
- FlashStorage.com: (Various flash links curated by Tegile and analyst firm Actual Tech Media [Scott D. Lowe])
- StorageSearch.com: How fast can your SSD run backwards?
- Seagate has shipped over 10 Million storage HHDD’s (SSHDs), is that a lot?
- Are large storage arrays dead at the hands of SSD?
- Can we get a side of context with them IOPS and other storage metrics?
- Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash
- EMC VFCache respinning SSD and intelligent caching (Part I)
- Flash Data Storage: Myth vs. Reality (Via InfoStor)
- Have SSDs been unsuccessful with storage arrays (with poll)?
- IBM buys flash solid state device (SSD) industry veteran TMS
- Is SSD dead? No, however some vendors might be
- Is SSD Only for Performance?
- NetApp EF540, something familiar, something new
- Researchers and marketers don’t agree on future of nand flash SSD
- Server and Storage IO Memory: DRAM and nand flash
- Some of my experiences with Hybrid Hard Disk Drives (HHDD), a series of posts based on what I use in my laptops and other systems including the Seagate Momentus XT here, here, here, and here (along with a Samsung 840 512GB SSD that I upgraded to from a 256GB 830).
- What, When, Why & How to Accelerate Storage (Via Storage Acceleration)
- Tips for Measuring Your Storage Acceleration (Via Storage Acceleration)
- Speaking of speeding up business with SSD storage
- Speaking of SSDs (with poll), cast your vote here
- Spiceworks SSD and related conversation here and here, profiling IOPs here, and SSD endurance here.
- SSD is in your future, How, when, with what and where you will be using it (PDF Presentation)
- SSD for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD (Via TheVirtualizationPractice); Part II The call to duty, SSD endurance; Part III What SSD is best for you?; and Part IV What’s best for your needs.
- IT and storage economics 101, supply and demand
- SSD, flash and DRAM, DejaVu or something new?
- The Many Faces of Solid State Devices/Disks (SSD)
- The Nand Flash Cache SSD Cash Dance (Via InfoStor)
- The Right Storage Option Is Important for Big Data Success (Via FedTech)
- Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?
- WD buys nand flash SSD storage I/O cache vendor Virident (Via VMware Communities)
- What is the best kind of IO? The one you do not have to do
- When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
- Why SSD based arrays and storage appliances can be a good idea (Part I)
- Why SSD based arrays and storage appliances can be a good idea (Part II)
- Q&A on Access data more efficiently with automated storage tiering and flash (Via SearchSolidStateStorage)
- Enterprise Storage Forum: Not Just a Flash in the Pan (Via EnterpriseStorageForum)
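Speaking of a bit of SSD going a long way while a lot of SSD costs a lot of cash (mentioned at the top of this list), here is a simple cost sketch. The prices and ratios are hypothetical placeholders for illustration, not vendor pricing or a sizing tool:

```python
# Hypothetical $/GB figures for illustration only, not vendor pricing.
HDD_USD_PER_GB = 0.03
SSD_USD_PER_GB = 0.25

capacity_gb = 10_000  # 10 TB usable capacity target

all_hdd = capacity_gb * HDD_USD_PER_GB
all_ssd = capacity_gb * SSD_USD_PER_GB
# Hybrid: full HDD capacity plus a 10% SSD cache tier to absorb hot I/Os.
hybrid = all_hdd + 0.10 * capacity_gb * SSD_USD_PER_GB

print(f"all HDD: ${all_hdd:,.0f}")
print(f"hybrid (10% SSD cache): ${hybrid:,.0f}")
print(f"all SSD: ${all_ssd:,.0f}")
```

Swap in your own numbers; the point is the shape of the tradeoff (a modest SSD tier in the right place vs. buying flash for everything), not the specific dollars.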
SSD Storage I/O and related technologies comments in the news
The following are some of my commentary and industry trend perspectives that appear in various global venues.
- Comments on using Flash Drives To Boost Performance (Via Processor)
- Comments on selecting the Right Type, Amount & Location of Flash Storage (Via Toms It Pro)
- Comments Google vs. AWS SSD: Which is the better deal? (Via SearchAWS)
- Tech News World: SanDisk SSD comments and perspectives.
- Tech News World: Samsung Jumbo SSD drives perspectives
- Comments on Why Degaussing Isn’t Always Effective (Via StateTech Magazine)
- Processor: SSD (FLASH and RAM)
- SearchStorage: FLASH and SSD Storage
- Internet News: Steve Wozniak joining SSD startup
- Internet News: SanDisk sale to Toshiba
- SearchSMBStorage: Comments on SanDisk and wireless storage product
- StorageAcceleration: Comments on When VDI Hits a Storage Roadblock and SSD
- Statetechmagazine: Boosting performance with SSD
- Edtechmagazine: Driving toward SSDs
- SearchStorage: Seagate SLC and MLC flash SSD
- SearchWindowServer: Making the move to SSD in a SAN/NAS
- SearchSolidStateStorage: Comments SSD marketplace
- InfoStor: Comments on SSD approaches and opportunities
- SearchSMBStorage: Solid State Devices (SSD) benefits
- SearchSolidState: Comments on Fusion-IO flash SSD and API’s
- SearchSolidStateStorage: Comments on SSD industry activity and OCZ bankruptcy
- Processor: Comments on Plan Your Storage Future including SSD
- Processor: Comments on Incorporate SSDs Into Your Storage Plan
- Digistor: Comments on SSD and flash storage
- ITbusinessEdge: Comments on flash SSD and hybrid storage environments
- SearchStorage: Perspectives on Cisco buying SSD storage vendor Whiptail
- StateTechMagazine: Comments on all flash SSD storage arrays
- Processor: Comments on choosing SSDs for your data center needs
- Searchsolidstatestorage: Comments on how to add solid state devices (SSD) to your storage system
- Networkcomputing: Comments on SSD/Hard Disk Hybrids Bridge Storage Divide
- Internet Evolution: Comments on IBM buying flash SSD vendor TMS
- ITKE: Comments on IBM buying flash SSD vendor TMS
- Searchsolidstatestorage: SSD, Green IT and economic benefits
- IT World Canada: Cloud computing, don’t be scared, look before you leap
- SearchStorage: SSD in storage systems
- SearchStorage: SAS SSD
- SearchSolidStateStorage: Comments on Access data more efficiently with automated storage tiering and flash
- InfoStor: Comments on EMC’s Light to Speed: Flash, VNX, and Software-Defined
- EnterpriseStorageForum: Cloud Storage Mergers and Acquisitions: What’s Going On?
Check out the Server StorageIO NVM Express (NVMe) focus page aka www.thenvmeplace.com for additional related content. Interested in data protection? Check out the data protection diaries series of posts here, or cloud and object storage here, and server storage I/O performance benchmarking here. Also check out the StorageIO events and activities page here, as well as tips and articles here, news commentary here, along with our newsletter here.
Ok, nuff said (for now)
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved