This is the second in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.
The many different faces or facets of NVMe configurations
NVMe can be deployed and used in many ways; the following examples show its flexibility today as well as where it may be headed in the future. An initial deployment scenario is NVMe devices (e.g. PCIe cards, M.2 or SFF-8639 drives) installed as storage in servers, or as back-end storage in storage systems. Figure 2 below shows a networked storage system or appliance that uses traditional server storage I/O interfaces and protocols for front-end access, while the back-end storage is all NVMe, or a hybrid of NVMe, SAS and SATA devices.
Figure 2 NVMe as back-end server storage I/O interface to NVM storage
A variation of the above is using NVMe for shared direct attached storage (DAS) such as the EMC DSSD D5. In the following scenario (figure 3), multiple servers in a rack or cabinet configuration have an extended PCIe connection that attaches to a shared all-flash storage array using NVMe on the front-end. Read more about this approach and the EMC DSSD D5 here or click on the image below.
Figure 3 Shared DAS All Flash NVM Storage using NVMe (e.g. EMC DSSD D5)
Next up in figure 4 is a variation of the previous example, except NVMe is implemented over a Remote Direct Memory Access (RDMA) based fabric network, such as InfiniBand or converged 10GbE/40GbE Ethernet using what is known as RoCE (RDMA over Converged Ethernet, pronounced "Rocky").
Figure 4 NVMe as a “front-end” interface for servers or storage systems/appliances
Watch for more topology and configuration options as NVMe along with associated hardware, software and I/O networking tools and technologies emerge over time.
Continue reading about NVMe with Part III (Need for Performance Speed) in this five-part series, or jump to Part I, Part IV or Part V.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
This is the first in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.
What is NVM Express (NVMe)
Non-Volatile Memory (NVM) includes persistent memory such as NAND flash and other forms of Solid State Devices (SSD). NVM Express (NVMe) is a new server storage I/O protocol, an alternative to AHCI/SATA and the SCSI protocol used by Serial Attached SCSI (SAS). Note that the NVMe name is owned and managed by the NVM Express industry trade group (www.nvmexpress.org).
The key question with NVMe is not if, but rather when, where, why, how and with what it will appear in your data center or server storage I/O data infrastructure. This is a companion to material on my micro site www.thenvmeplace.com, which provides an overview of NVMe and addresses some of the common questions about it.
Main features of NVMe include among others:
Lower latency due to improved drivers and increased queues (and queue sizes)
Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
Bandwidth improvements leveraging fast PCIe interfaces and available lanes
Dual-pathing of devices, similar to what is available with dual-path SAS devices
Unlocks the value of more cores per processor socket and software threads (productivity)
Various packaging options, deployment scenarios and configuration options
Appears as a standard storage device on most operating systems
Plug-and-play with in-box drivers on many popular operating systems and hypervisors
Why NVMe for Server Storage I/O? NVMe has been designed from the ground up for accessing fast storage, including flash SSD, leveraging PCI Express (PCIe). The benefits include lower latency, improved concurrency, increased performance and the ability to unleash more of the potential of modern multi-core processors.
Figure 1 shows common server I/O connectivity including PCIe, SAS, SATA and NVMe.
NVMe, leveraging PCIe, enables modern applications to reach their full potential. NVMe is one of those rare, generational protocol upgrades that comes around every couple of decades to help unlock the full performance value of servers and storage. NVMe does need new drivers, but once in place, it plugs and plays seamlessly with existing tools, software and user experiences. Likewise, many of those drivers now ship in the box with popular operating systems and hypervisors.
While SATA and SAS provide enough bandwidth for HDDs and some SSD uses, more performance is needed. Near-term, NVMe does not replace SAS or SATA; they can and will coexist for years to come, enabling different tiers of server storage I/O performance.
NVMe unlocks the potential of flash-based storage by allowing up to 65,536 (64K) queues, each with 64K commands per queue. SATA allows for only one command queue capable of holding 32 commands, and SAS supports one queue with 64K command entries. As a result, the storage I/O capabilities of flash can now be fed across PCIe much faster, enabling modern multi-core processors to complete more useful work in less time.
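To put those queue numbers in perspective, the protocol ceilings above can be multiplied out. Here is an illustrative Python sketch (the figures are the per-specification upper bounds discussed above; real devices typically implement far fewer queues):

```python
# Illustrative comparison of maximum outstanding commands per protocol.
# Figures are the per-spec upper bounds discussed above; actual devices
# typically implement far fewer queues than the NVMe ceiling.
protocols = {
    "SATA (AHCI)": {"queues": 1, "commands_per_queue": 32},
    "SAS": {"queues": 1, "commands_per_queue": 65536},
    "NVMe": {"queues": 65536, "commands_per_queue": 65536},
}

for name, p in protocols.items():
    total = p["queues"] * p["commands_per_queue"]
    print(f"{name:12s} -> up to {total:,} outstanding commands")
```

The several-orders-of-magnitude difference in potential concurrency is one reason NVMe can keep many processor cores busy issuing I/O in parallel.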
Even with an extra day during the month of February, there was a lot going on in a short amount of time. This included industry activity from servers to storage and I/O networking, hardware, software, services, mergers and acquisitions for cloud, virtual, containers and legacy environments. Check out the sampling of some of the various industry activities below.
Meanwhile, it's now time for March Madness, which also means metrics that matter and getting ready for World Backup Day on March 31st. Speaking of World Backup Day, check out the StorageIO events and activities page for a webinar on March 31st involving data protection as part of smart backups.
While your focus for March may be around brackets and other related themes, check out the Carnegie Mellon University (CMU) white paper listed below that looks at NAND flash SSD failures at Facebook. Some of the takeaways involve the importance of cooling and thermal management for flash, as well as wear management and the role of flash translation layer firmware along with controllers.
Also see the links to the Google white paper on their request to the industry for a new type of Hard Disk Drive (HDD) to store capacity data while SSDs handle the IOPS. The takeaway is that while Google uses a lot of flash SSD for high performance, low latency workloads, they also need a lot of high-capacity bulk storage that is more affordable on a cost-per-capacity basis. Google also makes several proposals and suggestions to the industry on what should and can be done going forward.
Backblaze also has a new report out on their 2015 HDD reliability and failure analysis, which makes for an interesting read. One of the takeaways is that while there are newer, larger capacity 6TB and 8TB drives, Backblaze is leveraging the lower cost per capacity of 4TB drives that are also available in volume quantity.
Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in-the-news commentary appearing soon.
Recent and popular Server StorageIO blog posts include:
EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I and Part II – EMC DSSD D5 Direct Attached Shared AFA EMC announced the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA) which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014 which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB raw flash SSD capacity with support for up to 48 dual-ported host client servers.
Various Hardware (SAS, SATA, NVM, M.2) and Software (VHD) Defined Odds and Ends Ever need to add another GbE port to a small server, workstation or perhaps an Intel NUC, however no PCIe slots are available? How about attaching an M.2 form factor flash SSD card to a server or device that does not have an M.2 port, or mirroring two M.2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following is a collection of odds-and-ends devices and tools for hardware and software defining your environment.
Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures For those who are into, or simply like to talk about, software defined storage (SDS), APIs, Windows, Virtual Hard Disks (VHD) or VHDX, or Hyper-V among other related themes, have you ever actually looked at the specification for VHDX? If not, here is the link to the open specification that Microsoft published (this one dates back to 2012).
Big Files and Lots of Little File Processing and Benchmarking with Vdbench Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is doing some form of file processing, or where you simply want some extra workload running in the background for whatever reason?
EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I
This is the first post in a two-part series pertaining to the EMC DSSD D5 announcement; you can read part two here.
EMC announced today the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA) which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014 which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB raw flash SSD capacity with support for up to 48 dual-ported host client servers.
Via EMC Pulse Blog
What Is DSSD D5
At a high level, EMC DSSD D5 is a PCIe direct attached SSD flash storage solution that enables aggregation of the disparate SSD card functionality typically found in separate servers into a shared system, without causing aggravation. DSSD D5 helps to alleviate the server-side I/O bottlenecks or aggravation issues that can result from aggregation of workloads or data. Think of DSSD D5 as a shared application server storage I/O accelerator, letting up to 48 servers access up to 144TB of raw flash SSD to support various applications that have the need for speed.
Applications that have the need for speed, or that can benefit from spending less time waiting for results, where time is money, can enable high-profitability computing. This includes legacy as well as emerging applications and workloads spanning little data, big data, and big fast structured and unstructured data. From Oracle to SAS to HBase and Hadoop among others, perhaps even Alluxio.
Some examples include:
Clusters and scale-out grids
High Performance Compute (HPC)
Parallel file systems
Forecasting and image processing
Fraud detection and prevention
Research and analytics
E-commerce and retail
Search and advertising
Legacy applications
Emerging applications
Structured database and key-value repositories
Unstructured file systems, HDFS and other data
Large undefined work sets
From batch stream to real-time
Reduces run times from days to hours
Where to learn more
Continue reading with the following links about NVMe, flash SSD and EMC DSSD.
Learn more about flash SSD here and NVMe here at thenvmeplace.com
What this all means
Today's legacy and emerging applications have the need for speed, and where the applications themselves may not need speed, the users as well as the Internet of Things (IoT) devices that depend upon, or feed, those applications do need things to move faster. Fast applications need fast software and hardware to get the same amount of work done quicker with fewer wait delays, as well as to process larger amounts of structured and unstructured little data, big data and very fast big data.
Different applications, along with the data infrastructures they rely upon including servers, storage, and I/O hardware and software, need to adapt to various environments; a one-size, one-approach model does not fit all scenarios. What this means is that some applications and data infrastructures will benefit from shared direct attached SSD storage such as rack-scale solutions using EMC DSSD D5, while other applications will benefit from AFA or hybrid storage systems along with other approaches used in various ways.
Various Hardware (SAS, SATA, NVM, M.2) and Software (VHD) Defined Odds and Ends
Ever need to add another GbE port to a small server, workstation or perhaps an Intel NUC, however no PCIe slots are available? How about attaching an M.2 form factor flash SSD card to a server or device that does not have an M.2 port, or mirroring two M.2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following is a collection of odds-and-ends devices and tools for hardware and software defining your environment.
Adding GbE Ports Without PCIe Ports
Adding Ethernet ports or NICs is relatively easy with larger servers, assuming you have available PCIe slots.
However, what about when you are limited on, or out of, PCIe slots? One option is to use a USB (preferably USB 3) to GbE connector. Another option, if you have an available mSATA card slot, such as on a server or workstation that had a WiFi card you no longer need, is to get an mSATA to GbE kit (shown below). Granted, you might have to get creative with the PCIe bracket depending on what you are going to put one of these into.
Left mSATA to GbE port, Right USB 3 (Blue) to GbE connector
Tip: Some hypervisors may not like the USB to GbE, or may not have drivers for the mSATA to GbE connector; likewise some operating systems do not have in-box drivers. Start by loading GbE drivers such as those needed for Realtek NICs and you may end up with plug and play.
SAS to SATA Interposer and M2 to SATA docking card
In the following figure on the left is a SAS to SATA interposer, which enables a SAS HDD or SSD to connect to a SATA connector (power and data). Keep in mind that SATA devices can attach to SAS ports; however, the usual rule of thumb is that SAS devices cannot attach to a SATA port or controller. To enforce this, the SAS and SATA connectors are notched differently, preventing a SAS device from plugging into a SATA connector.
Where SAS to SATA interposers come into play is that some servers or systems have SAS controllers, however their drive bays have SATA power and data connectors. The key here is that there is a SAS controller, but instead of a SAS connector to the drive bay, a SATA connector is used. To get around this, an interposer such as the one above allows the SAS device to attach to the SATA connector, which in turn attaches to the SAS controller.
Left SAS to SATA interposer, Right M2 to SATA docking card
In the above figure on the right is an M.2 NVM NAND flash SSD card attached to an M.2 to SATA docking card. This enables M.2 cards that have SATA protocol controllers (as opposed to M.2 NVMe) to be attached to a SATA port on an adapter or RAID card. Some of these docking cards can also be mounted in server or storage system 2.5" (or larger) drive bays. You can find both of the above at Amazon.com as well as many other venues.
P2V and Creating VHD and VHDX
I like and use various Physical to Virtual (P2V), Virtual to Virtual (V2V), Virtual to Physical (V2P) and Virtual to Cloud (V2C) tools, including those from VMware (vCenter Converter) and Microsoft (e.g. Microsoft Virtual Machine Converter) among others. Likewise, Clonezilla, Acronis and many other tools are in the toolbox. One of those other tools, handy for relatively quickly making a VHD or VHDX out of a running Windows server, is disk2vhd.
Now you should ask, why not just use the Microsoft Migration tool or VMware converter?
Simple: if you use those or other tools and run into issues with GPT vs. MBR or BIOS vs. UEFI settings among others, disk2vhd is a handy workaround. Simply install it, tell it where to create the VHD or VHDX (preferably on another device), start the creation, and when done, move the VHDX or VHD to where it is needed and go from there.
Where do you get disk2vhd and how much does it cost?
Get it here from the Microsoft TechNet Windows Sysinternals page, and it's free.
Where to learn more
Continue reading about the above and other related topics with these links.
While the above odds-and-ends tips, tricks, tools and technology may not be applicable for your production environment, perhaps they will be useful for your test or home lab needs. On the other hand, the above may not be practically useful for anything, yet simply entertaining; the rest is up to you as to whether there is any return on investment, or perhaps return on innovation, from using these or other odds-and-ends tips and tricks that might be outside the traditional box, so to speak.
Welcome to The NVMe Place: NVM (Non Volatile Memory) Express resources. The NVMe Place is about Non Volatile Memory (NVM) Express (NVMe), with industry trends perspectives, tips, tools, techniques, technologies, news and other information.
Disclaimer
Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.
Image used with permission of NVM Express, Inc.
Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.
The NVMe Place resources and NVM including SCM, PMEM, Flash
The NVMe Place covers Non Volatile Memory (NVM) including NAND flash, storage class memories (SCM) and persistent memories (PM), which are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a new server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS, commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.
Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.
Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in the 2.5″ drive form factor using a converged SFF-8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface in servers (or storage systems) for accessing fast flash and other NVM based devices.
NVMe as a “back-end” I/O interface for NVM storage media
NVMe as a “front-end” interface for servers or storage systems/appliances
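As a rough sense of why the PCIe generation and lane counts (x4, x8) mentioned above matter for bandwidth, here is a back-of-the-envelope sketch in Python. The per-lane signaling rates and encodings are the published PCIe figures; delivered throughput will be lower due to protocol overhead, and the function is purely illustrative:

```python
# Back-of-the-envelope PCIe throughput per direction (illustrative only).
# rate_gt: giga-transfers/sec per lane; enc: line-encoding efficiency.
GENS = {
    "PCIe 2.0": {"rate_gt": 5.0, "enc": 8 / 10},     # 8b/10b encoding
    "PCIe 3.0": {"rate_gt": 8.0, "enc": 128 / 130},  # 128b/130b encoding
}

def lane_gbps(gen: str, lanes: int) -> float:
    """Raw GB/s per direction for a given PCIe generation and lane count."""
    g = GENS[gen]
    return g["rate_gt"] * g["enc"] * lanes / 8  # convert bits to bytes

for gen in GENS:
    for lanes in (1, 4, 8):
        print(f"{gen} x{lanes}: ~{lane_gbps(gen, lanes):.2f} GB/s")
```

A PCIe 3.0 x4 link works out to roughly 3.9 GB/s raw per direction, which is why an NVMe x4 device has so much more headroom than a 6Gbps SATA III port (about 0.6 GB/s before encoding overhead).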
NVMe has also been shown to work over low-latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used on the back-end as well as on the front-end as a server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.
NVMe features
Main features of NVMe include among others:
Lower latency due to improved drivers and increased queues (and queue sizes)
Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
Bandwidth improvements leveraging fast PCIe interfaces and available lanes
Dual-pathing of devices, similar to what is available with dual-path SAS devices
Unlocks the value of more cores per processor socket and software threads (productivity)
Various packaging options, deployment scenarios and configuration options
Appears as a standard storage device on most operating systems
Plug-and-play with in-box drivers on many popular operating systems and hypervisors
NVMe and shared PCIe (e.g. shared PCIe flash DAS)
NVMe related content and links
The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.
MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPS) and more data to be moved (bandwidth) at a lower response time using less CPU.
The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison, there are many variables, so take the results with a grain of salt. While IOPS and bandwidth are often discussed, keep in mind that with NVMe's new protocol, drivers and device controllers that streamline I/O, less CPU is needed.
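One way to reason about the relationship between response time, concurrency and IOPS in comparisons like the one above is Little's Law, a general queueing result (not anything NVMe-specific): achievable IOPS is bounded by outstanding I/Os divided by average per-I/O latency. A sketch with illustrative round-number latencies, not measurements:

```python
# Little's Law sketch: IOPS upper bound = outstanding I/Os / avg latency.
# The latency figures used below are illustrative round numbers only.

def max_iops(outstanding_ios: int, latency_seconds: float) -> float:
    """Upper bound on IOPS for a given concurrency and per-I/O latency."""
    return outstanding_ios / latency_seconds

# e.g. 32 outstanding I/Os (a SATA-like queue) at 100 microseconds each
print(f"{max_iops(32, 100e-6):,.0f} IOPS")
# deeper queues at the same latency raise the bound linearly
print(f"{max_iops(256, 100e-6):,.0f} IOPS")
```

This is why deeper queues plus lower per-I/O latency, rather than raw bandwidth alone, are what let NVMe feed fast flash to busy multi-core processors.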
Additional NVMe Resources
Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.
If you are into the real bits-and-bytes details, such as device-driver-level content, check out the Linux NVMe reflector forum. The linux-nvme list is a good source if you are a developer wanting to stay up on what is happening in and around device drivers and associated topics.
Wrap Up
Watch for updates with more content, links and NVMe resources to be added here soon.
Hello and welcome to this August 2015 Server StorageIO update newsletter. Summer is wrapping up here in the northern hemisphere, which means the fall conference season has started, holidays are in progress, and it is getting to be back-to-school time. I have been spending my summer working on various things involving servers, storage, and I/O networking hardware, software and services, from cloud to containers, virtual and physical. This includes OpenStack, VMware vCloud Air, AWS, Microsoft Azure and GCS among others, as well as new versions of Microsoft Windows and Servers, Non Volatile Memory (NVM) including flash SSD, NVM Express (NVMe), databases, data protection, software defined, cache, micro-tiering and benchmarking using various tools among other things (some are still under wraps).
Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in-the-news commentary appearing soon.
Feature Topic – Non Volatile Memory including NAND flash SSD
Via Intel: Click above image to view history of memory
This month's feature topic theme is Non Volatile Memory (NVM), which includes technologies such as NAND flash commonly used in Solid State Devices (SSDs) today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage and I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.
NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
Spot The Newest & Best Server Trends (Via Processor)
Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.
Recent Server StorageIO articles appearing in different venues include:
IronMountain: Information Lifecycle Management: Which Data Types Have Value? It’s important to keep in mind that on a fundamental level, there are three types of data: information that has value, information that does not have value and information that has unknown value. Data value can be measured along performance, availability, capacity and economic attributes, which define how the data gets managed across different tiers of storage. Read more here.
EnterpriseStorageForum: Is Future Storage Converging Around Hyper-Converged? Depending on who you talk or listen to, hyper-converged storage is either the future of storage, or it is a hyped niche market that is not for everybody, particularly not larger environments. How converged is the hyper-converged market? There are many environments that can leverage CI along with HCI, CiB or other bundled solutions. Granted, not all of those environments will converge around the same CI, CiB and HCI or pod solution bundles, as everything is not the same in most IT environments and data centers. Not all markets, environments or solutions are the same. Read more here.
Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here.
Enmotus FuzeDrive provides micro-tiering boosting performance (reads and writes) of storage attached to physical bare metal servers, virtual and cloud instances including Windows and Linux operating systems across various applications. In the simple example above five separate SQL Server databases (260GB each) were placed on a single 6TB HDD. A TPCC workload was run concurrently against all databases with various numbers of users. One workload used a single 6TB HDD (blue) while the other used a FuzeDrive (green) comprised of a 6TB HDD and a 400GB SSD showing basic micro-tiering improvements.
The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.
While not a technology book, you do not have to be at or near retirement age to be planning for retirement. Some of you may already be at or near retirement age; for others, it's time to start planning or refining your plans. A friend recommended this book and I'm recommending it to others. It's pretty straightforward and you might be surprised how much money people may be leaving on the table! Check it out here at Amazon.com.
Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates
I attended the Flash Memory Summit in Santa Clara, CA last week and, not surprisingly, there were many announcements about Non-Volatile Memory (NVM) along with related enabling technologies. Some of these announcements were component based, intended for original equipment manufacturers (OEMs) ranging from startups to established vendors, systems integrators (SIs) and value added resellers (VARs), while others were more customer solution focused. From a customer solution focus, some of the technologies were consumer oriented while others were for business, and some for cloud-scale service providers.
Recent NVM, NVMe and Flash SSD news
A sampling of some recent NVM, NVMe and Flash related news includes among others:
New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
Everspin & Aupera show all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
Wow, Samsung’s New 16 Terabyte SSD Is the World’s Largest Hard Drive (Via Gizmodo)
Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)
NVMe primer
Via Intel: Click above image to view history of memory via Intel site
NVM includes technologies such as NAND flash commonly used in Solid State Devices (SSDs) today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage and I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.
Server Storage I/O memory (and storage) hierarchy
Keep in mind that memory is storage and storage is persistent memory, and that there are different classes, categories and tiers of memory and storage, as shown above, to meet various performance, availability, capacity and economic requirements. Besides NVM ranging from flash to NVRAM to emerging 3D XPoint among others, another popular topic that is gaining momentum is NVM Express (NVMe). NVMe (more material here at www.thenvmeplace.com) is a new server storage I/O access method and protocol for fast access to NVM based products. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS, commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.
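The tiered memory/storage hierarchy idea can be sketched with order-of-magnitude access times. The figures below are rough, commonly cited ballpark numbers for illustration only, not measurements of any specific product:

```python
# Illustrative server memory/storage tiers with rough order-of-magnitude
# access times. Ballpark figures for illustration, not measured values.
tiers = [
    ("CPU cache (SRAM)", 1e-9),   # ~nanoseconds
    ("DRAM",             1e-7),   # ~100 nanoseconds
    ("NVMe flash SSD",   1e-4),   # ~100 microseconds
    ("SATA flash SSD",   5e-4),   # ~500 microseconds
    ("HDD",              1e-2),   # ~10 milliseconds
]

# Show how many times slower each tier is relative to DRAM.
dram = dict(tiers)["DRAM"]
for name, latency in tiers:
    print(f"{name:18s} ~{latency:.0e}s  ({latency / dram:,.0f}x DRAM)")
```

The point of the hierarchy is the gap sizes: the jump from DRAM to even the fastest persistent tiers is several orders of magnitude, which is why faster access protocols such as NVMe (and media such as 3D XPoint) matter.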
Comparing AHCI/SATA, SCSI/SAS and NVMe, all of which can coexist to address different needs.
Leveraging the common PCIe hardware interface, NVMe-based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5-inch drive form factor using a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as being PCIe add-in cards (AIC) supporting x4, x8 and other implementations. Initially NVMe is being positioned as a back-end interface in servers (or storage systems) for accessing fast flash and other NVM-based devices.
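Since NVMe devices show up via standard in-box drivers, you can see what the operating system enumerates. Below is a minimal sketch (assuming a typical Linux sysfs layout under /sys/class/nvme; attribute names can vary by kernel version) that lists NVMe controllers and their model strings:

```python
from pathlib import Path

def list_nvme_devices(sysfs_root="/sys/class/nvme"):
    """List NVMe controller names and model strings as seen by the Linux
    in-box driver. A sketch only: assumes a typical sysfs layout."""
    devices = []
    root = Path(sysfs_root)
    if not root.is_dir():
        return devices  # no NVMe driver loaded, or not a Linux system
    for ctrl in sorted(root.iterdir()):
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.exists() else "unknown"
        devices.append((ctrl.name, model))
    return devices

# Example: prints entries such as ("nvme0", "<device model>") when present
for name, model in list_nvme_devices():
    print(name, model)
```

On a system without NVMe devices (or on non-Linux platforms) the function simply returns an empty list.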
NVMe as a "back-end" I/O interface in a server or storage system accessing NVM storage/media devices
NVMe as a “front-end” interface for servers (or storage systems/appliances) to use NVMe based storage systems
NVMe has also been shown to work over low-latency, high-speed RDMA-based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI-based SAS which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used not only on the back-end but also as a front-end server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI-based iSCSI and SCSI RDMA Protocol via InfiniBand (among others) are used.
NVMe and shared PCIe
NVMe features
Main features of NVMe include among others:
Lower latency due to improved drivers and increased queues (and queue depths)
Lower CPU overhead to handle larger numbers of I/Os (more CPU available for useful work)
Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
Dual-pathing of devices similar to what is available with dual-path SAS devices
Unlocks the value of more cores per processor socket and software threads (productivity)
Various packaging options, deployment scenarios and configuration options
Appears as a standard storage device on most operating systems
Plug-and-play with in-box drivers on many popular operating systems and hypervisors
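To put the queue-related items above in perspective, here is a back-of-the-envelope comparison of the protocol limits (these are spec maximums; shipping devices implement far fewer queues than the upper bound):

```python
# Protocol upper bounds (not what a given device actually implements).
ahci_queues, ahci_depth = 1, 32              # AHCI/SATA: one queue, 32 commands
nvme_queues, nvme_depth = 65_535, 65_536     # NVMe: up to ~64K queues x 64K commands

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI max outstanding commands: {ahci_outstanding:,}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
print(f"Ratio: ~{nvme_outstanding // ahci_outstanding:,}x")
```

The point is not the absolute numbers but the architectural headroom: many deep queues let multiple cores submit I/O in parallel without contending for a single shared queue.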
MSP CMG, September 2014 Presentation (Flash back to reality – Myths and Realities Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
Spot The Newest & Best Server Trends (Via Processor)
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.
What this all means and wrap up
The question is not if NVM is in your future, it is! Instead the questions are what type of NVM, including NAND flash among other mediums, will be deployed where, using what type of packaging or solutions (drives, cards, systems, appliances, cloud), for what role (as storage, primary memory, persistent cache), along with how much, among others. For some environments the solution is already, or will be, All NVM Arrays (ANA), All Flash Arrays (AFA) or All SSD Arrays (ASA), while for others the home run will be hybrid-based solutions that work for you, fitting in and adapting to your environment as it changes.
Also keep in mind that a little bit of fast memory, including NVM-based flash among others, in the right place can have a big benefit. My experience using flash-based NVMe devices on Windows and Linux systems is that you can see lower response times at higher IOPS, along with lower CPU consumption, particularly when compared to 6Gbps SATA. Likewise, bandwidth can easily be pushed to the limits of the NVMe device as well as the PCIe interface being used, such as x4 or x8, depending on implementation. That is also a warning and something to watch out for: to avoid comparing apples to oranges, understand when looking at different results whether those are for x4, x8 or faster PCIe, as the mere presence of PCIe does not mean you are running at full potential.
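As a rough guide for the apples-to-oranges warning above, here is a back-of-the-envelope sketch of usable PCIe Gen 3 bandwidth by lane count (an approximation using 8 GT/s per lane with 128b/130b encoding and ignoring protocol overhead, so real-world numbers will be somewhat lower):

```python
def pcie_gen3_bandwidth_gbs(lanes):
    """Approximate usable PCIe Gen 3 bandwidth in GB/s.
    Back-of-envelope: 8 GT/s per lane, 128b/130b line encoding,
    protocol overhead ignored."""
    per_lane_bytes_per_sec = 8e9 * (128 / 130) / 8   # ~0.985 GB/s per lane
    return lanes * per_lane_bytes_per_sec / 1e9

for lanes in (4, 8):
    print(f"x{lanes}: ~{pcie_gen3_bandwidth_gbs(lanes):.1f} GB/s")
```

In other words, an x8 result has roughly twice the ceiling of an x4 result before the device itself is even a factor, which is why knowing the lane count behind a benchmark number matters.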
Keep an eye on NVMe as a new high-speed, low-latency server storage I/O access protocol for unlocking the full performance capabilities of fast NVM-based storage as well as leveraging the multiple cores in today's fast processors. Does this mean AHCI/SATA or SCSI/SAS are now dead? Some will claim that, however at least near-term for the next few years (if not longer), those interfaces will continue to be used where they make sense, as well as where they can save dollars, specifically for cost-sensitive, high-capacity environments that do not need the full performance of NVMe just yet.
As for the Flash Memory Summit event in Santa Clara, that was a good day with time well spent in briefings, meetings, demos and ad hoc discussions on the expo floor.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Some August 2015 Amazon Web Services (AWS) and Microsoft Azure Cloud Updates
Cloud Services Providers continue to extend their feature, function and capabilities and the following are two examples. Being a customer of both Amazon Web Services (AWS) as well as Microsoft Azure (among others), I receive monthly news updates about service improvements along with new features. Here are a couple of examples involving recent updates from AWS and Azure.
Azure enhancements
Azure Premium Storage generally available in Japan East
Solid State Device (SSD) based Azure Premium Storage is now available in the Japan East region. Add up to 32 TB and more than 64,000 IOPS (read operations) per virtual machine with Azure Premium Storage. Learn more about Azure storage and pricing here.
Azure Data Factory generally available
Data Factory is a cloud-based data integration service for automated management as well as movement and transformation of data; learn more and view pricing options here.
AWS Partner Updates
Recent Amazon Web Services (AWS) customer update included the following pertaining to partner storage solutions.
Learn more about AWS Partner Network (APN) here or click on the above image.
Primary Cloud File and NAS storage complementing on-premises (e.g. your local) storage
Avere
Ctera
NetApp (Cloud OnTap)
Panzura
SoftNAS
Zadara
Secure File Transfer
Aspera
Signiant
Note that the above are those listed on the AWS Storage Partner Page as of this being published and subject to change. Likewise other solutions that are not part of the AWS partner program may not be listed.
How do primary storage clouds and cloud for backup differ?
What’s most important to know about my cloud privacy policy?
What this all means and wrap up
Cloud Service Providers (CSP) continue to enhance their capabilities, as well as their footprints, as part of growth. In addition to technology, tools and the number of regions, sites and data centers, the CSPs are also expanding their partner networks, both in how many partners they have and in the scope of those partnerships. Some of these partnerships treat the cloud as a destination, while others enable hybrid scenarios where public clouds become an extension complementing traditional IT. Everything is not the same in most environments, and one type of cloud approach does not have to suit or fit all needs, hence the value of hybrid cloud deployment and usage.
This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.
Is this 3D XPoint marketing, manufacturing or material technology?
You can’t have a successfully manufactured material technology without some marketing; likewise, marketing without some manufactured material would be manufactured marketing. In the case of the 3D XPoint announcement launch, there was real technology shown, granted it was only wafers and dies as opposed to an actual DDR4 DIMM, PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, on a relative comparison basis, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made-for-TV or web circus event, which can be a good thing.
Wafer unveiled containing 3D XPoint 128 Gb dies
Who will get access to 3D XPoint?
Initial 3D XPoint production capacity will allow the two companies to offer early samples to their customers later this year, with general production slated for 2016, meaning early real customer-deployed products starting sometime in 2016.
Is it NAND or NOT?
3D XPoint is not NAND flash, nor is it NVRAM or DRAM; it is a new class of NVM that can be used for server-class main memory with persistence, or as persistent data storage among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable, with a longer useful life for writing and storing data vs. NAND flash.
Why is 3D XPoint important?
As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947, granted there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class joining its predecessors many of which continue to be used today in various roles.
Major memory classes or categories timeline
In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).
Initially the 3D XPoint technology is available in a two-layer, 128 Gbit per die capacity. Keep in mind that there are 8 bits to a byte, resulting in 16 GByte capacity per chip initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve, and most implementations will have multiple chips in some type of configuration.
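The capacity math above can be sketched quickly (the multi-die package sizes below are hypothetical illustrations of how chips might be combined, not announced products):

```python
die_gbit = 128              # initial 3D XPoint die density: 128 Gbit
die_gbyte = die_gbit // 8   # 8 bits per byte -> 16 GByte per die

# Hypothetical multi-die packages, assuming capacity scales
# linearly with die count (actual products may differ).
for dies in (1, 2, 4, 8):
    print(f"{dies} die(s): {dies * die_gbyte} GB")
```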
What will 3D XPoint cost?
During the 3D XPoint launch webinar, Intel and Micron hinted that initial pricing will be between current DRAM and NAND flash on a per cell or bit basis; however, real pricing and costs will vary depending on how it is packaged for use. For example, whether it is placed on a DDR4 (or different type of) DIMM, on a PCIe Add In Card (AIC), or sold as a drive form factor SSD among other options will affect the real price. Likewise, as with other memories and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to further improve.
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) (Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron (Media Room, links, videos, images and more including B roll videos)
DRAM, which has been around for some time, has plenty of life left for many applications, as does NAND flash including new 3D NAND, V-NAND and other variations. For the next several years there will be a co-existence between new and old NVM and DRAM among other memory technologies including 3D XPoint. Read more in this series including Part I here and Part III here.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
Part III – 3D XPoint server storage class memory SCM
Updated 1/31/2018
This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.
What is 3D XPoint and how does it work?
3D XPoint is a new class or category of memory (view other categories of memory here) that provides performance for reads and writes closer to that of DRAM, with about 10x the capacity density of DRAM. In addition to speed closer to DRAM (vs. slower NAND flash), 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher-density fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs) including PCIe Add In Cards (AIC), M.2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices, with better endurance or life span compared to NAND flash.
3D XPoint architecture and attributes
The initial die, or basic chip building block, 3D XPoint implementation is a two-layer 128 Gbit device which, at 8 bits per byte, would yield 16 GB raw. Over time, increased densities should become available as the bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die could hold up to 16 GBytes of data, multiple dies could be packaged together to create a 32 GB, 64 GB, 128 GB or larger actual product. Think about not only where packaged flash-based SSD capacities are today, but also in terms of where DDR3 and DDR4 DIMMs are, such as 4 GB, 8 GB, 16 GB and 32 GB densities.
The 3D aspect comes from the memory being in a matrix, initially two layers high, with multiple rows and columns that intersect; where those intersections occur there is a microscopic material-based switch for accessing a particular memory cell. Unlike NAND flash, where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed to speed up reads and writes in a more granular fashion. It is this more granular access, along with performance, that will enable 3D XPoint to be used in lower-latency scenarios where DRAM would normally be used.
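The granular cross-point addressing described above can be illustrated with a toy model (purely illustrative; the real geometry, materials and addressing logic are far more involved):

```python
class CrossPointArray:
    """Toy model of cross-point addressing: each cell sits at the
    intersection of a row and column on one of two layers and can be
    read or written individually, unlike a NAND page/block."""
    def __init__(self, layers=2, rows=4, cols=4):
        self.cells = {(l, r, c): 0
                      for l in range(layers)
                      for r in range(rows)
                      for c in range(cols)}

    def write(self, layer, row, col, value):
        self.cells[(layer, row, col)] = value  # touches one cell, no block erase

    def read(self, layer, row, col):
        return self.cells[(layer, row, col)]

array = CrossPointArray()
array.write(1, 2, 3, 7)        # update a single cell
print(array.read(1, 2, 3))     # -> 7
print(array.read(0, 0, 0))     # neighboring cells unaffected -> 0
```

Contrast this with NAND flash, where writing that one value would involve reading, erasing and rewriting a whole page or block of cells.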
Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron-based, it is material-based. While Intel and Micron did not specify the actual chemistry and physical materials used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out how Dailytech makes an interesting educated speculation or thesis on the underlying technology.
Watch the following video to get a better idea and visually see how 3D XPoint works.
Left many dies on a wafer, right, a closer look at the dies cut from the wafer
Dies (here and here) are the basic building blocks of the chips that in turn are the components used to create DDR DIMMs for main computer memory, as well as SD and MicroSD cards, USB thumb drives, PCIe AICs and drive form factor SSDs, along with custom modules on motherboards, or consumption at the bare die and wafer level (e.g. where you are doing really custom things at volume, beyond soldering iron scale).
Have Intel and Micron cornered the NVM and memory market?
We have heard proclamations, speculation and statements of the demise of DRAM, NAND flash and other volatile and NVM memories for years, if not decades now. Each year there is the usual "this will be the year of x," where "x" can include, among others: Resistive RAM (aka ReRAM or RRAM), also known as the memristor, which HP earlier announced they were going to bring to market before canceling those plans earlier this year, while Crossbar continues to pursue RRAM; MRAM or Magnetoresistive RAM; Phase Change Memory (aka CRAM, PCM or PRAM); and FRAM (aka FeRAM or Ferroelectric RAM).
Expanding persistent memory and SSD storage markets
Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can then be produced at production volumes with economic yields. As a reference point, there is still plenty of life in both DRAM as well as NAND flash, the latter having appeared around 1989.
Technology industry adoption precedes customer adoption and deployment
There is a difference between industry adoption and deployment vs. customer adoption and deployment; they are related, yet separated by time as shown in the above figure. What this means is that there can be several years from the time a new technology is initially introduced to when it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads over the past few years since it was introduced in 1989.
This begs the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at Dailytech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.
What’s the difference between 3D NAND flash and 3D XPoint?
3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. it is not NAND).
3D NAND is a variation of traditional flash, the difference being vertical stacking vs. horizontal to improve density, also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment (scaling up), vs. scaling out by using more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier, and more recently magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron (Media Room, links, videos, images and more including B roll videos)
First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and both DRAM and NAND flash will not be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. The same goes for DRAM, which has been around for some time and still has plenty of life left for many applications. However, other applications that need improved speed over NAND flash, or persistence and density vs. DRAM, will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus, at least for the next several years, there will be a co-existence between new and old NVM and DRAM among other memory technologies. Bottom line: 3D XPoint is a new class of NVM memory that can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
Updated 1/31/2018
This is the first of a three-part series on the Intel and Micron 3D XPoint Non Volatile Memory (NVM) for servers and storage announcement. Read Part II here and Part III here.
In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used both for primary main memory (e.g. what’s in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and for persistent storage faster than today’s NAND flash-based solid state devices (SSD), not to mention future hybrid usage scenarios. Note that this announcement, while having the common term 3D in it, is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).
Data needs to be close to processing, processing needs to be close to the data (locality of reference)
Server Storage I/O memory hardware and software hierarchy along with technology tiers
What did Intel and Micron announce?
Intel SVP and General Manager Non-Volatile Memory solutions group Robert Crooke (Left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology jointly developed and manufactured by Intel and Micron which is a new form or category of NVM that can be used for both primary memory in servers, laptops, other computers among other uses, as well as for persistent data storage.
Robert Crooke (Left) and Mark Durcan (Right)
Summary of 3D XPoint announcement
New category of NVM memory for servers and storage
Joint development and manufacturing by Intel and Micron in Utah
Non volatile so can be used for storage or persistent server main memory
Allows NVM to scale with data, storage and processors performance
Leverages capabilities of both Intel and Micron who have collaborated in the past
Performance: Intel and Micron claim up to 1,000x faster vs. NAND flash
Availability: persistent NVM compared to DRAM, with better durability (life span) vs. NAND flash
Capacity: densities about 10x better vs. traditional DRAM
Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products)
What applications and products is 3D XPoint suited for?
In general, 3D XPoint should be able to be used for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT and cloud or managed service provider data centers based applications and services, as well as consumer focused among many others.
3D XPoint enabling various applications
In general, applications or usage scenarios, along with supporting products, that can benefit from 3D XPoint include among others: applications that need larger amounts of main memory in a denser footprint such as in-memory databases, little and big data analytics, gaming, waveform analysis for security, copyright or other detection analysis, life sciences, high-performance compute and high-productivity compute, energy, video and content serving among many others.
In addition, there are applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or unplanned maintenance, or of having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot). 3D XPoint will also be useful for applications that need faster read and write performance compared to current generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, much like how today’s applications and others have benefited from DRAM used in Dual Inline Memory Modules (DIMM) and NAND flash advances over the past several decades.
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) (Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron (Media Room, links, videos, images and more including B roll videos)
First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint along with more analysis and commentary.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
How to test your HDD SSD AFA Hybrid or cloud storage
Updated 2/14/2018
Over at BizTech Magazine I have a new article, 4 Ways to Performance Test Your New HDD or SSD, that provides a quick guide to verifying or learning what the speed characteristics of your new storage device are.
To some, the above (read the full article here) may seem like common-sense tips and things everybody should know; on the other hand, there are many people who are new to servers, storage, I/O, networking, hardware, software, cloud and virtual, along with various applications, not to mention different tools.
Thus the above is a refresher for some (e.g. déjà vu) while for others it might be new and revolutionary, or simply helpful. If you are interested in HDDs and SSDs, as well as other server storage I/O performance along with benchmarking tools, techniques and trends, check out the collection of links here (Server and Storage I/O Benchmarking and Performance Resources).
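As a companion to the article's tips, here is a crude sequential-write sketch in Python (an illustration only; it is no substitute for purpose-built benchmarking tools, and OS caching means results should be taken with a grain of salt even with the fsync):

```python
import os
import tempfile
import time

def measure_write_mbs(path, size_mb=16):
    """Crude sequential-write throughput check in MB/s.
    A sketch, not a benchmark: writes random data, then fsyncs
    so at least the final flush reaches the device."""
    data = os.urandom(1024 * 1024)   # 1 MB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())         # force buffered data to the device
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name
print(f"~{measure_write_mbs(target):.0f} MB/s sequential write")
os.unlink(target)
```

For real testing, use larger data sets than available cache, multiple runs, and established tools, as the linked article discusses.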
Hello and welcome to this February 2015 Server and StorageIO update newsletter. The new year is off and running with many events already underway including the recent USENIX FAST conference and others on the docket over the next few months.
Speaking of the FAST (File and Storage Technologies) event, which I attended last week, here is a link where you can download the conference proceedings.
In other events, VMware announced version 6 of their vSphere ESXi hypervisor and associated management tools including VSAN, VVOL among other items.
This month’s newsletter has a focus on server storage I/O performance topics with various articles, tips, commentary and blog posts.
Watch for more news, updates and industry trends perspectives coming soon.
Following are some StorageIO industry trends perspectives comments that have appeared in various print and on-line venues. Over at Processor there are comments on resilient & highly available, underutilized or unused servers, what abandoned data Is costing your company, align application needs with your infrastructure (server, storage, networking) resources.
Also at Processor, explore flash-based (SSD) storage, enterprise backup buying tips, re-evaluating server security, new tech advancements for server upgrades, and understanding the cost of acquiring storage.
Meanwhile over at CyberTrend there are some perspectives on enterprise backup and better servers mean better business.
Check out this quick-read tip on storage benchmark and testing fundamentals over at BizTech. Also check out these resources and links on server storage I/O performance and benchmarking tools.
December 11, 2014 – BrightTalk Server & Storage I/O Performance
December 10, 2014 – BrightTalk Server & Storage I/O Decision Making
December 9, 2014 – BrightTalk Virtual Server and Storage Decision Making
December 3, 2014 – BrightTalk Data Protection Modernization
November 13 9AM PT – BrightTalk Software Defined Storage
Videos and Podcasts
StorageIO podcasts are also available at StorageIO.tv
From StorageIO Labs
Research, Reviews and Reports
StarWind Virtual SAN
Using less hardware with software defined storage management. This report looks at the needs of Microsoft Hyper-V ROBO and SMB environments using software defined storage with less hardware. Read more here.
Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.