Cloud Constellation SpaceBelt – Out Of This World Cloud Data Centers?

server storage I/O trends

A new startup called Cloud Constellation (aka SpaceBelt) has announced plans to converge space-based satellite technology with terrestrial IT and cloud data infrastructure technologies, including non-volatile memory (NVM, e.g. SSD) and storage class memory (SCM). While announcing their Series A funding and proposed value proposition (below), Cloud Constellation did not say how much was raised, who the investors are, or who is on the management team, leading to some, well, rather cloudy information.

Cloud Constellation’s SpaceBelt transforms cybersecurity for enterprise and government operations moving high-value data around the world by:

  • insulating it completely from the Internet and terrestrial leased lines
  • liberating it from cyberattacks and surreptitious activities
  • protecting it from natural disasters and force majeure events
  • addressing all jurisdictional complexities and constraints
  • avoiding risks of violating privacy regulations

Truly secure data transfer: Enterprises and governments will finally be enabled to bypass use of leaky networks and compromised servers interconnecting their sites around the world.

New option for cloud service providers: The service will be a key market differentiator for cloud service providers to offer a transformative, ultra-high degree of network security to clients reliant on moving sensitive, mission-critical data around the world each day.

What is SpaceBelt Cloud Constellation?

From their website www.cloudconstellation.com you will see the following.

Cloud Constellation Space Belt
www.cloudconstellation.com

Keeping in mind that today is April 1st, which means it is April Fools Day 2016, my motto for the day is trust, yet verify. So just for fun, check out this new company that I had a briefing with earlier this week, and that announced their Series A funding earlier in March 2016.

The question you have to ask yourself today is whether this is an out of this world April Fools prank, or an out of this world idea that will eclipse current cloud services such as Amazon Web Services (AWS), Google, IBM Softlayer, Microsoft Azure and Rackspace among others.

Or, will SpaceBelt go the way of earlier cloud high flyers such as HP Cloud and Nirvanix?

Btw, keep in mind that only you can prevent cloud data loss; however, cloud and virtual data availability is also a shared responsibility.

Some Questions and Things To Ponder

  • Is this an April Fools Joke?
  • How much Non-Volatile Memory (NVM) such as NAND, 3D NAND, 3D XPoint or other Storage Class Memory (SCM) can be physically placed on each bird (e.g. satellite)?
  • What will the solar panels look like to power the birds, plus batteries for heating and cooling the NVM (contrary to popular myth, NVM does get warm if not hot)?
  • What is the availability, accessibility and durability model, and how will data be protected: replicated, mirrored, or with an out of this world LRC/erasure code advanced parity model?
  • How will the storage be accessed, and what will the end-points look like: iSCSI, NBD, FUSE, NFS, CIFS, HDFS, Torrent, JSON, ODBC, REST/HTTP, FTP or something else?
  • Security will be a concern, as will geo placement; after all, it's one thing to move data across some borders, but how about when the data is hundreds of miles above those borders?
  • Cost will be an interesting model to follow, as will watching whether competitors from SpaceX, Amazon, Boeing, GE, NSA, Google, Facebook or others emerge.
  • What will the uplink and download speeds be, not to mention the latency of moving and accessing data from the satellites? For those who have DirecTV or similar satellite service, you know the pros and cons associated with that. Speaking of which, perhaps you have experienced a thunderstorm with DirecTV or Dish, or a cloud storm due to a cloud provider service or site failure; think about what happens to your cloud data if the satellite dish is disrupted during an upload or download.
  • I also wonder how the various industry trade groups will wrap their heads around this one; what kind of new standards, initiatives and out of this world marketing promotions will we see or hear about? You know that some creative marketer will declare surface clouds as dead, just saying.
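Some of the bandwidth and latency questions above can at least be bounded by physics. Here is a back-of-the-envelope sketch of one-way propagation delay at different orbital altitudes; this is my own illustration, not based on any SpaceBelt specifications, and the altitudes are assumptions.

```python
# Back-of-the-envelope satellite link latency estimate.
# Illustrative only; altitudes are assumptions, not SpaceBelt specs.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def one_way_propagation_ms(altitude_km: float) -> float:
    """Best-case one-way propagation delay, ground station directly below."""
    return altitude_km / C_KM_PER_S * 1000

# Low Earth orbit (roughly where such constellations are proposed) vs. GEO
leo_ms = one_way_propagation_ms(1_000)    # roughly 3.3 ms each way
geo_ms = one_way_propagation_ms(35_786)   # roughly 119 ms each way

print(f"LEO one-way: {leo_ms:.1f} ms, GEO one-way: {geo_ms:.1f} ms")
```

Even in the best case, a GEO round trip adds hundreds of milliseconds before any protocol or processing overhead, which is why the uplink/latency question matters for storage access.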

Where To Learn More

What This All Means

The folks over at Cloud Constellation say their SpaceBelt, made up of a constellation (e.g. an in-orbit cluster) of satellites, will be circling the globe around 2019. I wonder if they will be ready to do a proof of concept (POC) technology demonstration of their IP using TCP-based networking and server storage I/O protocols, leveraging a hot air balloon or weather balloon near term; if nothing else, it would be a great marketing ploy.

If nothing else, putting their data infrastructure technology on a hot air balloon could be a fun marketing ploy to say their cloud rises above the hot air of other cloud marketing. Or if they do a POC using a weather balloon, they could show and say their cloud rises above traditional cloud storms, oh the fun…

Check out Cloud Constellation and their SpaceBelt, see for yourself, and then you decide what is going on!

Remember, it's April Fools Day today: trust, yet verify.

What say you, is this an April Fools Joke or the next big thing?

Ok, nuff said (for now), time to listen to Pink Floyd Dark Side of the Moon ;)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Server StorageIO March 2016 Update Newsletter

Volume 16, Issue III

Hello and welcome to the March 2016 Server StorageIO update newsletter.

Here in the northern hemisphere spring has officially arrived as of the March 20th equinox, along with warmer weather, more hours and minutes of daylight, and plenty of things to do. In addition to the official arrival of spring here (fall in the southern hemisphere), it also means in the U.S. that March Madness and college basketball tournament playoff brackets and office (betting) pools are in full swing.

In This Issue

  • Feature Topic and Themes
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcast’s
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links

A couple of other things associated with spring: moving clocks forward, which occurred recently here in the U.S. Spring is also a good time to check your smoke and dangerous gas detectors or other alarms, which means replacing batteries and cleaning the detectors.

    Besides smoke and gas detectors, spring is also a good time to do preventive maintenance on your battery backup uninterruptible power supplies (UPS), as well as generators and other standby power devices. For my part, I had a service tech out to do a tune up on my Kohler generator, as well as replace some batteries in APC UPS devices.

    Besides smoke and CO (carbon monoxide) detectors, generators and UPS standby power systems, and March Madness basketball and other sports tournaments, something else occurs on March 31st (besides being the day before April 1st and April Fools Day). March 31st is World Backup (and Restore) Day, an awareness day for making sure your data, applications, settings, configurations, keys, software and systems are backed up and can be recovered.

    Hopefully none of you are in the situation where data, applications, systems, computers, laptops, tablets, smart phones or other devices only get backed up or protected once a year, however maybe you know somebody who does.

    March also marks the 10th anniversary of Amazon Web Services (AWS) cloud services (more here), happy birthday AWS.

    March wraps up on the 31st with World Backup Day, which is intended to draw attention to the importance of data protection and your ability to recover applications and data. While backups are important, so too is testing to make sure you can actually use and recover from what was protected. Keep in mind that while some claim backup is dead, data protection is alive, and as long as vendors and others keep referring to data protection as backup, backup will stay alive.
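The restore-testing point can be made concrete. Here is a minimal sketch of verifying that a restored copy matches the original by comparing checksums; the file paths are hypothetical placeholders, and real data protection validation involves much more than this.

```python
# Minimal "test your restores" sketch: compare SHA-256 checksums of an
# original file and its restored copy. Paths are hypothetical placeholders.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large files do not blow up memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_verified(original: str, restored: str) -> bool:
    """True if the restored copy is byte-for-byte identical to the original."""
    return sha256_of(original) == sha256_of(restored)
```

A scheduled job that restores a sample of protected files and runs a check like this catches silent protection failures long before you need the data back.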

    Join me and folks from HP Enterprise (HPE) on March 31st at 1PM ET for a free webinar compliments of HPE with a theme of Backup with Brains, emphasis on awareness and analytics to enable smart data protection. Click here to learn more and register.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcast’s along with in the news commentary appearing soon.

    Cheers GS

    Feature Topic and Theme

    This month's feature theme and topics include backup (and restore) as part of data protection, plus more on clouds (public, private and hybrid), including how some providers such as DropBox are moving out of public clouds such as AWS and building their own data centers.

    Building off of the February newsletter, there is more on Google, including their use of Non-Volatile Memory (NVM), aka NAND flash Solid State Devices (SSD), and some of their research. In addition to Google's use of SSD, check out the posts and industry activity on NVMe, as well as other news and updates, including new converged platforms from Cisco and HPE among others.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Via Redmondmag: AWS Cloud Storage Service Turns 10 years old in March, happy birthday AWS (read more here at the AWS site).
  • Cisco announced new flexible HyperFlex converged compute server platforms for hybrid cloud and other deployments. Also announced were NetApp All Flash Array (AFA) FlexPod converged solutions powered by Cisco UCS servers and networking technology. In other activity, Cisco unveiled a Digital Network Architecture to enable customer digital data transformation. Cisco also announced its intent to acquire CliQr for management of hybrid clouds.

  • Data Direct Networks (DDN) expands NAS offerings with new GS14K platform via PRnewswire.

  • Via Computerworld: DropBox quits Amazon cloud, takes back 500 PB of data. DropBox has created their own cloud to host videos, images, files, folders, objects, blobs and other storage items that used to be stored within AWS S3. In this DropBox post, you can read about why they decided to create their own cloud, as well as how they previously used a hybrid approach with metadata kept local and actual data stored in AWS S3. Now both the data and the metadata are in DropBox data centers. However, DropBox is still keeping some data in AWS, particularly in different geographies.

  • Web site hosting company GoDaddy has extended their capabilities, similar to other service providers, by adding an OpenStack powered cloud service. This follows a trend where others such as Bluehost (where my sites are located on a DPS) have evolved from simple shared hosting to dedicated private servers (DPS) and virtual private servers (VPS), along with other cloud related services. Think of a VPS as a virtual machine or cloud instance. Likewise, some of the cloud service providers such as AWS are moving into dedicated private servers.

  • Following up from the February 2016 Server StorageIO Update Newsletter, which included Google's message to disk vendors: Make hard drives like this, even if they lose more data, and the Google Disks for Data Centers white paper (PDF here), read about Google's experiences with SSD.

    This PDF white paper, presented at the recent Usenix 2016 conference, outlines Google's experiences with different types (SLC, MLC, eMLC) and generations of NAND flash SSD media across various vendors. Some of the takeaways include that context matters when looking at SSD metrics on endurance, durability and errors. While some in the industry focus on Uncorrectable Bit Error Rate (UBER), there also needs to be awareness around Raw Bit Error Rate (RBER) among other metrics and usage. Read more about Google's experiences here.
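To make the RBER vs. UBER distinction concrete, here is a small illustrative calculation. The counts below are made up for demonstration, not numbers from the Google paper.

```python
# Illustrative only: RBER and UBER measure different things.
# The error counts and bits-read figures below are invented examples.

def rber(raw_bit_errors: int, bits_read: int) -> float:
    """Raw bit error rate: bit errors seen before ECC correction."""
    return raw_bit_errors / bits_read

def uber(uncorrectable_errors: int, bits_read: int) -> float:
    """Uncorrectable bit error rate: errors that ECC could not fix."""
    return uncorrectable_errors / bits_read

bits = 10 ** 15                 # on the order of 125 TB read
print(rber(10 ** 6, bits))      # 1e-09: many raw errors, all correctable
print(uber(1, bits))            # 1e-15: a single uncorrectable error
```

The point of the paper's framing is that a drive can show a high RBER yet still deliver a low UBER, so judging media health by raw errors alone can mislead.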


  • Hewlett Packard Enterprise (HPE) announced Hyper-Converged systems Via Marketwired including HC 380 based on ProLiant DL380 technology providing all in one (AiO) converged compute, storage and virtualization software with simplified management. The HC 380 is targeted for mid-market aka small medium business (SMB), remote office branch office (ROBO) and workgroups. HPE also announced all flash array (AFA) enhancements for 3PAR storage (Via Businesswire).

  • Microsoft has announced that it will be releasing a version of its SQL Server database on Linux. What this means is that as well as being able to use SQL Server and associated tools on Windows and Azure platforms, you will also, in the not so distant future, be able to deploy on Linux. Making SQL Server available on Linux opens up some interesting scenarios and solution alternatives vs. Oracle, along with MySQL and associated MySQL derivatives, as well as NoSQL offerings (read more about NoSQL databases here). Read more about Microsoft's SQL Server for Linux here.

    In addition to SQL Server for Linux, Microsoft has also announced enhancements for easing docker container migrations to clouds. In other Microsoft activity, they announced enhancements to StorSimple and Azure. Keep an eye out for Windows Server 2016 Tech Preview 5 (e.g. TP5), which will be the next release of the upcoming new version of the popular operating system.


  • MSDI, Rockland IT Solutions and Source Support Services Merge to Form Congruity with CEO Todd Gresham, along with Mike Stolz and Mark Shirman (formerly of Glasshouse) among others you may know.

  • Via Businesswire: PrimaryIO announces server-based flash acceleration for VMware systems, while Riverbed extends Remote Office Branch Office (ROBO) cloud connectivity Via Businesswire.

  • Via Computerworld: Samsung ships 12 Gbps SAS 15TB 2.5" 3D NAND flash SSD (hey Samsung, send me a device or two and I will give them a test drive in the Server StorageIO lab ;). Not to be outdone, Via Forbes: Seagate announces a fast SSD card, as well as, for the High Performance Compute (HPC) and Super Compute (SC) markets, Via HPCwire: Seagate sets sights on the broader HPC market with their scale-out clustered Lustre based systems.

  • Servers Direct is now offering the HGST 4U x 60 drive enclosures while Via PRnewswire: SMIC announces RRAM partnership.

  • ATTO Technology has enhanced their RAID Arrays Behind FibreBridge 7500, while Oracle announced mainframe virtual tape library (VTL) cloud support Via Searchdatabackup. In other updates for this month, VMware has released and made generally available (GA) VSAN 6.2 and Via Businesswire: Wave and Centeris Launch Transpacific Broadband Data and Fiber Hub.
  • The above is a sampling of some of the various industry news, announcements and updates for this March. Watch for more news and updates in April coming out of NAB and OpenStack Summit among other events.

    View other recent news and industry trends here.

    StorageIO Commentary in the news

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    • Continum – R1Soft Server Backup Manager
    • HyperIO – HiMon and HyperIO server storage I/O monitoring software tools
    • Runcast – VMware automation and management software tools
    • Opvizor – VMware health management software tools
    • Asigra – Cloud, Managed Service and distributed backup/data protection tools
    • Datera – Software defined storage management startup
    • E8 Storage – Software Defined Stealth Storage Startup
    • Venyu – Cloud and data center data protection tools
    • StorPool – Distributed software defined storage management tools
    • ExaBlox – Scale out storage solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future

    Check out these resources and links technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    Check out this video (Via YouTube) of a Google Data Center tour.

    In the IoT and IoD era of little and big data, how about this video I did with my DJI Phantom drone and an HD GoPro (e.g. 1K vs. 2.7K or 4K in newer cameras). This generates about a GByte of raw data per 10 minutes of flight, which then means another GB copied to a staging area, then to protected copies, then production versions and so forth. Thus a 2 minute clip in 1080p resulted in plenty of storage, including produced and uploaded versions along with backup copies in archives spread across YouTube, Dropbox and elsewhere.
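The storage math in that example can be sketched as a quick back-of-the-envelope calculation; the capture rate comes from the paragraph above and the copy count is an assumption for illustration.

```python
# Rough footprint math for the drone-video example: raw capture plus the
# staging, production and protection copies it spawns. Rates are assumptions.
GB_PER_10_MIN = 1.0  # approximate raw 1080p capture rate mentioned above

def total_footprint_gb(flight_minutes: float, copies: int) -> float:
    """Total storage consumed when the raw capture is kept in N places."""
    raw = flight_minutes / 10 * GB_PER_10_MIN
    return raw * copies

# A 30 minute flight kept in 4 places (raw, staging, produced, backup)
print(total_footprint_gb(30, 4))  # 12.0
```

The takeaway is that every "small" capture multiplies: a few GB of raw footage easily becomes tens of GB once staging, produced versions and protection copies are counted.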

    StorageIO podcasts are also available via and at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    TBA – April 27, 2016 webinar

    NAB (Las Vegas) April 19-20, 2016

    Backup with Brains – March 31, 2016 free webinar (1PM ET)

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    NVMe is in your future, resources to start preparing today for tomorrow

    NVM and NVMe corner (Via and Compliments of Micron.com)

    View more NVMe related items at microsite thenvmeplace.com.

    Read more in this Server StorageIO industry Trends Perspective white paper and lab review.

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of mine as well as books from others.

    For this month's recommended reading, it's a blog site. If you have not visited Eric Siebert's (@ericsiebert) site vSphere-land and its companion resources pages, including top blogs, do so now.

    Granted there is a heavy VMware server virtualization focus, however there is a good balance of other data infrastructure topics spanning servers, storage, I/O networking, data protection and more.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    The Future of Ethernet – 2016 Roadmap released by Ethernet Alliance

    The Future of Ethernet – 2016 Roadmap released by Ethernet Alliance

    server storage I/O trends

    The Future of Ethernet – 2016 Roadmap released by Ethernet Alliance

    Ethernet Alliance Roadmap

    The Ethernet Alliance has announced their 2016 roadmap of enhancements for Ethernet.

    Ethernet enhancements include speeds, connectivity interfaces that span needs from consumer, enterprise, to cloud and managed service providers.

    Highlights of Ethernet Roadmap

    • FlexEthernet (FlexE)
    • QSFP-DD, microQSFP and OBO interfaces
    • Speeds from 10Mbps to 400GbE.
    • 4 Pair Power over Ethernet (PoE)
    • Power over Data Line (PoDL)

    Ethernet Alliance 2016 Roadmap Image
    Images via EthernetAlliance.org

    Who is the Ethernet Alliance?

    The Ethernet Alliance (@ethernetallianc) is an industry trade and marketing consortium focused on the advancement and success of Ethernet related technologies.

    Where to learn more

    The Ethernet Alliance has also made available via their web site two presentations part one here and part two here (or click on the following images).

    Ethernet Alliance 2016 roadmap presentation #1 Ethernet Alliance 2016 roadmap presentation #2

    Also visit www.ethernetalliance.org/roadmap

    What this all means

    Ethernet technologies continue to be enhanced, from consumer, Internet of Things (IoT) and Internet of Devices (IoD) uses to enterprise, data center, IT and non-IT usage, as well as cloud and managed service providers. At the lower end, where there is broad adoption, the continued evolution of easier to use, lower cost, interoperable technologies and interfaces expands the Ethernet adoption footprint. At the higher end, all of those IoT, IoD, consumer and other devices aggregate (consolidate) into cloud and other services that need speeds of 10GbE, 40GbE, 100GbE and 400GbE.
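To put those roadmap speeds in perspective, here is a simple idealized wire-rate calculation showing how long moving 1 TB takes at each rate. This deliberately ignores protocol overhead, so real transfers take longer.

```python
# Idealized time to move data at various Ethernet speeds on the roadmap.
# Wire rate only; no TCP/IP or encapsulation overhead is modeled.

def transfer_seconds(terabytes: float, gigabits_per_sec: float) -> float:
    bits = terabytes * 8 * 10 ** 12        # decimal TB to bits
    return bits / (gigabits_per_sec * 10 ** 9)

for gbe in (1, 10, 40, 100, 400):
    print(f"{gbe:>3} GbE: {transfer_seconds(1, gbe):,.0f} s")
```

At 1 GbE a terabyte takes over two hours even in theory, while at 400 GbE the same payload is a matter of seconds, which is why the high-end speeds matter for cloud and service provider aggregation.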

    With the 2016 Roadmap the Ethernet Alliance has provided good direction as to where Ethernet fits today and tomorrow.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Part V – NVMe overview primer (Where to learn more, what this all means)

    This is the fifth in a five-part mini-series providing a NVMe primer overview.

    View Part I, Part II, Part III, Part IV, Part V as well as companion posts and more NVMe primer material at www.thenvmeplace.com.

    There are many different facets of NVMe, including the protocol, which can be deployed on PCIe (AiC, U.2/8639 drives, M.2) for local direct attached, dedicated or shared storage on the front-end or back-end of storage systems. NVMe direct attach is also found in servers and laptops using M.2 NGFF mini cards (e.g. "gum sticks"). In addition to direct attached, dedicated and shared, NVMe is also deployed on fabrics, including over Fibre Channel (FC-NVMe) as well as NVMe over Fabrics (NVMeoF) leveraging RDMA based networks (e.g. iWARP and RoCE among others).

    The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time, resulting in greater application productivity. NVMe has been designed from the ground up with more and deeper queues, supporting a larger number of commands in those queues. This in turn enables the SSD to better optimize command execution for much higher concurrent IOPS. NVMe will coexist along with SAS, SATA and other server storage I/O technologies for some time to come. But NVMe will be at the top-tier of storage as it takes full advantage of the inherent speed and low latency of flash while complementing the potential of multi-core processors that can support the latest applications.
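A rough way to see why more and deeper queues matter is Little's law: the achievable IOPS ceiling is bounded by the number of outstanding I/Os divided by per-I/O latency. The numbers below are illustrative, not measured NVMe results.

```python
# Little's law sketch: IOPS ceiling = outstanding I/Os / per-I/O latency.
# Queue depths and latency below are illustrative, not measured values.

def max_iops(outstanding_ios: int, latency_ms: float) -> float:
    """Upper bound on IOPS given concurrency and per-I/O service time."""
    return outstanding_ios / (latency_ms / 1000)

# A single legacy-style queue of 32 vs. many NVMe queues totaling 4096
print(max_iops(32, 0.1))     # roughly 320,000 IOPS ceiling
print(max_iops(4096, 0.1))   # roughly 41,000,000 IOPS ceiling
```

The device and CPU still have to deliver that work, but the model shows why a protocol whose queueing cannot hold enough outstanding I/Os becomes the bottleneck long before the flash media does.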

    With NVMe, the capabilities of underlying NVM and storage memories are further realized. These and other improvements with NVMe enable concurrency while reducing latency to remove server storage I/O traffic congestion. (Devices used for comparison include a PCIe x4 NVMe AiC SSD, a 12 Gbps SAS SSD and a 6 Gbps SATA SSD.) The result is that applications demanding more concurrent I/O activity along with lower latency will gravitate towards NVMe to access fast storage.

    Like the robust PCIe physical server storage I/O interface it leverages, NVMe provides both flexibility and compatibility. It removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.

    Some environments will initially focus on NVMe for local server storage I/O performance and capacity available today. Other environments will phase in emerging external NVMe flash-based shared storage systems over time.

    Planning is an essential ingredient for any enterprise. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications. Decisions made today will have a big impact on future data and information infrastructures.

    Key questions should be, how much speed do your applications need now, and how do growth plans affect those requirements? How and where can you maximize your financial return on investment (ROI) when deploying NVMe and how will that success be measured?

    Several vendors are working on, or have already introduced NVMe related technologies or initiatives. Keep an eye on among others including AWS, Broadcom (Avago, Brocade), Cisco (Servers), Dell EMC, Excelero, HPE, Intel (Servers, Drives and Cards), Lenovo, Micron, Microsoft (Azure, Drivers, Operating Systems, Storage Spaces), Mellanox, NetApp, OCZ, Oracle, PMC, Samsung, Seagate, Supermicro, VMware, Western Digital (acquisition of SANdisk and HGST) among others.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means

    NVMe is in your future, if not already part of your present. So if NVMe is the answer, what are the questions?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Where, How to use NVMe overview primer

    server storage I/O trends
    Updated 1/12/2018

    This is the fourth in a five-part miniseries providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    Where and how to use NVMe

    As mentioned and shown in the second post of this series, NVMe is initially being deployed inside servers as "back-end," fast, low latency storage using PCIe Add-In-Cards (AiC) and flash drives. Similar to SAS NVM SSDs and HDDs that support dual paths, NVMe has a primary path and an alternate path. If one path fails, traffic keeps flowing without causing slowdowns. This feature is an advantage for those already familiar with the dual-path capabilities of SAS, enabling them to design and configure resilient solutions.

    NVMe devices, including NVM flash AiCs, will also find their way into storage systems and appliances as back-end storage, co-existing with SAS or SATA devices. Another emerging deployment scenario is shared NVMe direct attached storage (DAS) with multiple-server access via PCIe external storage with dual paths for resiliency.

    Even though NVMe is a new protocol, it leverages existing skill sets. Anyone familiar with SAS/SCSI and AHCI/SATA storage devices will need little or no training to deploy and manage NVMe. Since NVMe-enabled storage appears to a host server or storage appliance as a LUN or volume, existing Windows, Linux and other OS or hypervisor tools can be used. On Windows, for example, other than going to the device manager to see what the device is and what controller it is attached to, it is no different from installing and using any other storage device. The experience on Linux is similar, particularly when using in-the-box drivers that ship with the OS. One minor Linux difference of note is that instead of seeing a /dev/sda device, you might see a device name like /dev/nvme0n1 or /dev/nvme0n1p1 (with a partition).
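One quick way to see that naming convention on a Linux host is to glob for NVMe block devices. This is a sketch using only the Python standard library; the list will simply be empty on systems without NVMe devices.

```python
# List NVMe block devices on a Linux host, matching the /dev/nvme0n1
# naming described above. Returns an empty list where none exist.
import glob

def nvme_block_devices() -> list:
    # nvme<controller>n<namespace>, with an optional p<partition> suffix
    return sorted(glob.glob("/dev/nvme*n*"))

print(nvme_block_devices())  # e.g. ['/dev/nvme0n1', '/dev/nvme0n1p1']
```

Tools such as lsblk or the vendor's management utilities give richer detail, but the naming alone tells you the device is attached via the NVMe driver rather than the SCSI/SATA stack.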

    Keep in mind that NVMe, like SAS, can be used as "back-end" access from servers (or storage systems) to a storage device or system, for example JBOD SSD drives (e.g. 8639), PCIe AiC or M.2 devices. NVMe can also, like SAS, be used as a "front-end" on storage systems or appliances in place of, or in addition to, other access such as GbE-based iSCSI, Fibre Channel, FCoE, InfiniBand, NAS or object.

    What this means is that NVMe can be implemented in a storage system or appliance on both the "front-end" (e.g. server or host side) and the "back-end" (e.g. device or drive side), just like SAS. Another similarity to SAS is that NVMe supports dual-pathing of devices, permitting system architects to design resiliency into their solutions. When the primary path fails, access to the storage device can be maintained with failover so that fast I/O operations can continue, as with SAS.

    NVM connectivity options including NVMe
    Various NVM NAND flash SSD devices and their connectivity, including NVMe, M.2, SATA and 12 Gbps SAS, are shown in figure 6.

    Various NVM SSD interfaces including NVMe and M.2
    Figure 6 Various NVM flash SSDs (Via StorageIO Labs)

    On the left in figure 6 is a NAND flash NVMe PCIe AiC; top center is a USB thumb drive that has been opened up, showing a NAND die (chip); middle center is an mSATA card; bottom center is an M.2 card; next on the right is a 2.5" 6 Gbps SATA device; and far right is a 12 Gbps SAS device. Note that an M.2 card can be either a SATA or NVMe device depending on its internal controller, which determines which host or server protocol device driver to use.

    The role of PCIe has evolved over the years, as have its performance and packaging form factors. In addition to add-in card (AiC) slots, PCIe form factors also include the M.2 small form factor (aka Next Generation Form Factor or NGFF), which replaces legacy mini-PCIe cards and, like other devices, can be an NVMe or SATA device.

    The 8639 (or possibly 8637) connector (figure 7) can be used to support SATA as well as NVMe, depending on the drive device installed and host server driver support. There are various M.2 NGFF form factors, including 2230, 2242, 2260 and 2280. There are also M.2 to regular physical SATA converter or adapter cards available, enabling M.2 devices to attach to legacy SAS/SATA RAID adapters or HBAs.
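As an aside, those M.2 form factor codes encode the card's width and length in millimeters: the first two digits are the width and the remaining digits the length, so 2280 means 22 mm wide by 80 mm long. A tiny decoder for illustration:

```python
# Decode M.2 (NGFF) size codes: first two digits are width in mm,
# the remaining digits are length in mm.

def m2_dimensions(code: str) -> tuple:
    """Return (width_mm, length_mm) for an M.2 size code like '2280'."""
    return int(code[:2]), int(code[2:])

for code in ("2230", "2242", "2260", "2280"):
    print(code, "->", m2_dimensions(code))  # e.g. 2280 -> (22, 80)
```

Knowing the length matters in practice because server and laptop M.2 sockets have mounting standoffs only for particular lengths.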

    NVMe 8637 and 8639 interface backplane slots
    Figure 7 PCIe NVMe 8639 Drive (Via StorageIO Labs)

    On the left of figure 7 is a view toward the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of figure 7 is the connector end of an 8639 NVM SSD, showing additional pins compared to a SAS or SATA device. Those extra pins provide PCIe x4 connectivity to the NVMe device. The 8639 drive connectors enable a device such as an NVM (e.g. NAND flash) SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Be careful about judging a device or component by its physical packaging or interface connection. In figure 6 the device has SAS/SATA along with PCIe physical connections, yet it’s what’s inside (e.g. its controller) that determines whether it is a SAS, SATA or NVMe enabled device. This also applies to HDDs and PCIe AiC devices, as well as I/O networking cards and adapters that may use common physical connectors yet implement different protocols. For example, the SFF-8643 HD-Mini SAS internal connector is used for 12 Gbps SAS attachment as well as PCIe to devices such as 8639.

    Depending on the type of device inserted, access can be via NVMe over PCIe x4, SAS (12 Gbps or 6 Gbps) or SATA. Enclosures based on the 8639 connector have a physical connection from their backplanes to the individual drive connectors, as well as to PCIe, SAS, and SATA cards or connectors on the server motherboard or via PCIe riser slots.

    While PCIe devices, including AiC slot based, M.2 or 8639, can have common physical interfaces and lower-level signaling, it’s the protocols, controllers, and drivers that determine how they get software defined and used. Keep in mind that it’s not just the physical connector or interface that determines what a device is or how it is used; it’s also the protocol, command set, controller and device drivers.
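One practical way to see this protocol-vs-connector distinction is to ask the operating system what each block device actually speaks. A minimal sketch, assuming a Linux system with sysfs mounted (the `/sys/block` layout is standard Linux; the classification logic is my own simplification):

```python
# Minimal sketch (Linux with sysfs assumed): the kernel, not the connector,
# tells you which protocol a block device actually speaks. NVMe devices show
# up as nvme*, while SAS and SATA devices both appear via the SCSI stack (sd*).
from pathlib import Path

def block_device_protocols(sysfs="/sys/block"):
    """Map visible block devices to the protocol family the kernel reports."""
    root = Path(sysfs)
    if not root.is_dir():      # non-Linux or restricted environment
        return {}
    protocols = {}
    for dev in root.iterdir():
        if dev.name.startswith("nvme"):
            protocols[dev.name] = "nvme (PCIe)"
        elif dev.name.startswith("sd"):
            protocols[dev.name] = "scsi (sas or sata)"
        else:
            protocols[dev.name] = "other"
    return protocols

print(block_device_protocols())
```

Two drives sharing the same 8639 backplane slot can thus land in completely different driver stacks, which is exactly the point of the paragraph above.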

    Continue reading about NVMe with Part V (Where to learn more, what this all means) in this five-part series, or jump to Part I, Part II or Part III.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    NVMe Need for Performance Speed

    server storage I/O trends
    Updated 1/12/2018

    This is the third in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    How fast is NVMe?

    It depends! Generally speaking NVMe is fast!

    However, fast interfaces and protocols also need fast storage devices, adapters, drivers, servers, operating systems and hypervisors, as well as applications that drive or benefit from the increased speed.

    A server storage I/O example is in figure 5, where a 6 Gbps SATA NVM flash SSD (left) is shown alongside an NVMe 8639 (x4) drive, both directly attached to a server. The workload is 8 Kbyte sized random writes with 128 threads (workers), showing results for IOPs (solid bar) along with response time (dotted line). Not surprisingly, the NVMe device has a lower response time and a higher number of IOPs. However, also note how the amount of CPU time used per IOP is lower with the NVMe drive on the right.

    NVMe storage I/O performance
    Figure 5 6 Gbps SATA NVM flash SSD vs. NVMe flash SSD

    While many people are aware of or learning about the IOP and bandwidth improvements as well as the decrease in latency with NVMe, something that gets overlooked is how much less CPU is used. If a server is spending time in wait modes, that can result in lost productivity; by finding and removing the barriers, more work can be done on a given server, perhaps even delaying a server upgrade.

    In figure 5, notice the lower amount of CPU used per unit of work done (e.g. I/O or IOP), which translates to more effective use of your server resources. That means either doing more work with what you have, potentially delaying a CPU or server upgrade, or using those extra CPU cycles to power software defined storage management stacks including erasure coding or advanced parity RAID, replication and other functions.

    Table 1 shows relative server I/O performance of some NVM flash SSD devices across various workloads. As with any performance comparison, take these and the following results with a grain of salt, as your speed will vary.

                                8KB I/O Size                               1MB I/O Size
    NAND flash SSD              Seq.Read   Seq.Write  Ran.Read   Ran.Write  Seq.Read  Seq.Write  Ran.Read  Ran.Write

    NVMe   IOPs                 41829.19   33349.36   112353.6   28520.82   1437.26   889.36     1336.94   496.74
    PCIe   Bandwidth (MB/sec)   326.79     260.54     877.76     222.82     1437.26   889.36     1336.94   496.74
    AiC    Resp. (ms)           3.23       3.90       1.30       4.56       178.11    287.83     191.27    515.17
           CPU / IOP            0.001571   0.002003   0.000689   0.002342   0.007793  0.011244   0.009798  0.015098

    12Gb   IOPs                 34792.91   34863.42   29373.5    27069.56   427.19    439.42     416.68    385.9
    SAS    Bandwidth (MB/sec)   271.82     272.37     229.48     211.48     427.19    429.42     416.68    385.9
           Resp. (ms)           3.76       3.77       4.56       5.71       599.26    582.66     614.22    663.21
           CPU / IOP            0.001857   0.00189    0.002267   0.00229    0.011236  0.011834   0.01416   0.015548

    6Gb    IOPs                 33861.29   9228.49    28677.12   6974.32    363.25    65.58      356.06    55.86
    SATA   Bandwidth (MB/sec)   264.54     72.1       224.04     54.49      363.25    65.58      356.06    55.86
           Resp. (ms)           4.05       26.34      4.67       35.65      704.70    3838.59    718.81    4535.63
           CPU / IOP            0.001899   0.002546   0.002298   0.003269   0.012113  0.032022   0.015166  0.046545

    (All workloads are 100% of the indicated pattern, e.g. 100% sequential read.)

    Table 1 Relative performance of various protocols and interfaces

    The workload results in table 1 were generated using a vdbench script running on a Windows 2012 R2 based server and are intended to be a relative indicator of different protocols and interfaces; your performance mileage will vary. The results in table 1 compare the number of IOPs (activity rate) for reads, writes, random and sequential access across small 8KB and large 1MB sized I/Os.

    Also shown in table 1 are bandwidth or throughput (e.g. amount of data moved), response time and the amount of CPU used per IOP. Note in table 1 how NVMe can do higher IOPs with a lower CPU cost per IOP, or, using a similar amount of CPU, do more work at a lower latency. SSD has been used for decades to help reduce CPU bottlenecks or defer server upgrades by removing I/O wait times and reducing CPU consumption (e.g. wait or lost time).
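The numbers in table 1 hang together arithmetically, which is a useful sanity check on any benchmark report. Bandwidth should equal IOPs times I/O size, and for a closed-loop workload with a fixed number of workers (128, as in the figure 5 workload), Little's Law approximates response time. The arithmetic below is my own check, not from the original post:

```python
# Sanity-checking Table 1 (my own arithmetic, not from the original post):
# bandwidth should equal IOPs x I/O size, and with a closed-loop workload of
# 128 workers (as in figure 5), Little's Law approximates response time.

def bandwidth_mb_s(iops, io_size_kb):
    """MB/sec implied by an IOP rate at a given I/O size (1 MB = 1024 KB)."""
    return iops * io_size_kb / 1024.0

# NVMe 8KB 100% random read row from Table 1
iops = 112353.6
print(round(bandwidth_mb_s(iops, 8), 2))  # 877.76 MB/sec, matching Table 1

# Little's Law: latency ~= outstanding I/Os / IOPs (assuming 128 workers)
print(round(128 / iops * 1000.0, 2))      # ~1.14 ms, near the 1.30 ms reported
```

The small gap between the Little's Law estimate and the measured 1.30 ms reflects host-side software overhead that the simple model ignores.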

    Can NVMe solutions run faster than those shown above? Absolutely!

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Continue reading about NVMe with Part IV (Where and How to use NVMe) in this five-part series, or jump to Part I, Part II or Part V.

    Ok, nuff said, for now.

    Gs


    Different NVMe Configurations

    server storage I/O trends
    Updated 1/12/2018

    This is the second in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    The many different faces or facets of NVMe configurations

    NVMe can be deployed and used in many ways; the following are some examples to show its flexibility today as well as where it may be headed in the future. An initial deployment scenario is NVMe devices (e.g. PCIe cards, M.2 or 8639 drives) installed as storage in servers or as back-end storage in storage systems. Figure 2 below shows a networked storage system or appliance that uses traditional server storage I/O interfaces and protocols for front-end access, with back-end storage being all NVMe, or a hybrid of NVMe, SAS and SATA devices.
    NVMe as back-end server storage I/O interface to NVM
    Figure 2 NVMe as back-end server storage I/O interface to NVM storage

    A variation of the above is using NVMe for shared direct attached storage (DAS) such as the EMC DSSD D5. In the following scenario (figure 3), multiple servers in a rack or cabinet configuration have an extended PCIe connection that attaches to a shared all flash storage array using NVMe on the front-end. Read more about this approach and the EMC DSSD D5 here or click on the image below.

    EMC DSSD D5 NVMe
    Figure 3 Shared DAS All Flash NVM Storage using NVMe (e.g. EMC DSSD D5)

    Next up in figure 4 is a variation of the previous example, except NVMe is implemented over an RDMA (Remote Direct Memory Access) based fabric network using Converged 10GbE/40GbE or InfiniBand in what is known as RoCE (RDMA over Converged Ethernet pronounced Rocky).

    NVMe over Fabric RoCE
    Figure 4 NVMe as a “front-end” interface for servers or storage systems/appliances

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Watch for more topology and configuration options as NVMe along with associated hardware, software and I/O networking tools and technologies emerge over time.

    Continue reading about NVMe with Part III (Need for Performance Speed) in this five-part series, or jump to Part I, Part IV or Part V.

    Ok, nuff said, for now.

    Gs


    NVMe overview primer

    server storage I/O trends
    Updated 2/2/2018

    This is the first in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    What is NVM Express (NVMe)

    Non-Volatile Memory (NVM) includes persistent memory such as NAND flash and other forms of Solid State Devices (SSD). NVM Express (NVMe) is a new server storage I/O protocol, an alternative to AHCI/SATA and the SCSI protocol used by Serial Attached SCSI (SAS). Note that the name NVMe is owned and managed by the NVM Express industry trade group (www.nvmexpress.org).

    The key question with NVMe is not if, rather when, where, why, how and with what will it appear in your data center or server storage I/O data infrastructure. This is a companion to material that I have on my micro site www.thenvmeplace.com that provides an overview of NVMe, as well as helps to discuss some of the questions about NVMe.

    Main features of NVMe include among others:

    • Lower latency due to improved drivers and increased queues (and queue sizes)
    • Lower CPU use to handle larger numbers of I/Os (more CPU available for useful work)
    • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging fast PCIe interfaces and available lanes
    • Dual-pathing of devices, like what is available with dual-path SAS devices
    • Unlocking the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems
    • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

    Why NVMe for Server Storage I/O?
    NVMe has been designed from the ground up for accessing fast storage, including flash SSD, leveraging PCI Express (PCIe). The benefits include lower latency, improved concurrency, increased performance and the ability to unleash much more of the potential of modern multi-core processors.

    NVMe Server Storage I/O
    Figure 1 shows common server I/O connectivity including PCIe, SAS, SATA and NVMe.

    NVMe, leveraging PCIe, enables modern applications to reach their full potential. NVMe is one of those rare, generational protocol upgrades that comes around every couple of decades to help unlock the full performance value of servers and storage. NVMe does need new drivers, but once in place, it plugs and plays seamlessly with existing tools, software and user experiences. Likewise, many of those drivers now ship in the box with popular operating systems and hypervisors.

    While SATA and SAS provided enough bandwidth for HDDs and some SSD uses, more performance is needed. Near-term, NVMe does not replace SAS or SATA; they can and will coexist for years to come, enabling different tiers of server storage I/O performance.

    NVMe unlocks the potential of flash-based storage by allowing up to 65,536 (64K) queues each with 64K commands per queue. SATA allowed for only one command queue capable of holding 32 commands per queue and SAS supports a queue with 64K command entries. As a result, the storage IO capabilities of flash can now be fed across PCIe much faster to enable modern multi-core processors to complete more useful work in less time.
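The queue arithmetic above is worth working out explicitly, since the gap is striking. Using the queue structures just described:

```python
# The queue-depth difference above, worked out: total theoretical outstanding
# commands is queues x commands-per-queue for each protocol as described.

nvme_total = 65536 * 65536  # NVMe: up to 64K queues, 64K commands each
sata_total = 1 * 32         # AHCI/SATA: one queue holding 32 commands
sas_total = 1 * 65536       # SAS: a single queue with 64K command entries

print(f"NVMe: {nvme_total:,} outstanding commands")  # 4,294,967,296
print(f"SAS:  {sas_total:,}")                        # 65,536
print(f"SATA: {sata_total:,}")                       # 32
print(f"NVMe vs SATA: {nvme_total // sata_total:,}x the command capacity")
```

Real devices and drivers expose far fewer queues than the protocol maximum, but even a handful of per-CPU-core queues removes the single-queue bottleneck that SATA imposes.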

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Continue reading about NVMe with Part II (Different NVMe configurations) in this five-part series, or jump to Part III, Part IV or Part V.

    Ok, nuff said, for now.

    Gs


    Server StorageIO February 2016 Update Newsletter

    Volume 16, Issue II

    Hello and welcome to the February 2016 Server StorageIO update newsletter.

    Even with an extra day during the month of February, there was a lot going on in a short amount of time. This included industry activity from servers to storage and I/O networking, hardware, software, services, mergers and acquisitions for cloud, virtual, containers and legacy environments. Check out the sampling of some of the various industry activities below.

    Meanwhile, it’s now time for March Madness, which also means metrics that matter and getting ready for World Backup Day on March 31st. Speaking of World Backup Day, check out the StorageIO events and activities page for a webinar on March 31st involving data protection as part of smart backups.

    While your focus for March may be around brackets and other related themes, check out the Carnegie Mellon University (CMU) white paper listed below that looks at NAND flash SSD failures at Facebook. Some of the takeaways involve the importance of cooling and thermal management for flash, as well as wear management and role of flash translation layer firmware along with controllers.

    Also see the links to the Google white paper on their request to the industry for a new type of Hard Disk Drive (HDD) to store capacity data while SSDs handle the IOPs. The takeaway is that while Google uses a lot of flash SSD for high performance, low latency workloads, they also need to have a lot of high-capacity bulk storage that is more affordable on a cost per capacity basis. Google also makes several proposals and suggestions to the industry on what should and can be done on a go forward basis.

    Backblaze also has a new report out on their 2015 HDD reliability and failure analysis which makes for an interesting read. One of the takeaways is that while there are newer, larger capacity 6TB and 8TB drives, Backblaze is leveraging the lower cost per capacity of 4TB drives that are also available in volume quantity.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers GS

    In This Issue

  • StorageIOblog posts
  • Industry Activity Trends
  • New and Old Vendor Update
  • Events and Webinars
  • StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I
      and Part II – EMC DSSD D5 Direct Attached Shared AFA
      EMC announced the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA) which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014 which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB raw flash SSD capacity with support for up to 48 dual-ported host client servers.
    • Various Hardware (SAS, SATA, NVM, M2) and Software (VHD) Defined Odd’s and Ends
      Ever need to add another GbE port to a small server, workstation or perhaps Intel NUC, however no PCIe slots are available? How about attaching an M.2 form factor flash SSD card to a server or device that does not have an M.2 port, or mirroring two M.2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following are a collection of odds and ends devices and tools for hardware and software defining your environment.
    • Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures
      For those who are into, or simply like to talk about software defined storage (SDS), API’s, Windows, Virtual Hard Disks (VHD) or VHDX, or Hyper-V among other related themes, have you ever actually looked at the specification for VHDX? If not, here is the link to the open specification that Microsoft published (this one dates back to 2012).
    • Big Files and Lots of Little File Processing and Benchmarking with Vdbench
      Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is either doing some form of file processing, or that you simply want to have some extra workload running in the background for whatever reason?

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Tegile – IntelliFlash HD Now Available To Enterprises Worldwide
  • Via Forbes – Competitors and Cash Bleed Put Pressure on Pure Storage
  • Via HealthCareBusiness – Philips and Amazon team up on cloud-based health record storage
  • Via Zacks – IBM Advances Hybrid Cloud Object Based Storage
  • DataONstorage expands Microsoft Hyper Converged Infrastructure platforms
  • Via ITBusinessEdge – Nimble updates All Flash Array (AFA) storage
  • Carnegie Mellon University – A Large-Scale Study of Flash Memory Failures
  • Cisco Buys Cliqr Cloud Orchestration
  • Backblaze – 2015 Hard Drive Reliability Reports and Analysis
  • Via BusinessCloudNews – Verizon Closing Down Its Public Cloud
  • Via BusinessInsider – US Government Approves Dell and EMC Deal
  • EMC and VMware announce new VCE VxRAIL Converged Solutions
  • EMC announces new IBM zSeries Mainframe enhancements for VMAX
  • EMC announces new DSSD D5 AFA and VMAX AFA enhancements
  • HPE announces enhancements to StoreEasy 1650 storage
  • Seagate now shipping world’s slimmest and fastest 2TB mobile HDD
  • Via VMblog – Oracle Scoops Up Ravello to Boost Its Public Cloud Offerings
  • Via Investors – SSD and Chinese Investments in Western Digital
  • ATTO announces 32G (e.g. Gen 6) Fibre Channel adapters
  • Google to disk vendors: Make hard drives like this, even if they lose more data
  • Google Disk for Data Centers White Paper (PDF Here)
  • View other recent news and industry trends here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    • SkySync – Enterprise File Sync and Share
    • SANblaze – Storage protocol emulation tools
    • OpenIT – DCIM and Data Infrastructure Management Tools
    • Infinit.sh – Decentralized Software Based File Storage Platform
    • Alluxio – Open Source Software Defined Storage Abstraction Layer
    • Genie9 – Backup and Data Protection Tools
    • E8 Storage – Software Defined Stealth Storage Startup

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

     

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    NAB (Las Vegas) April 19-20, 2016

    March 31, 2016 Webinar (1PM ET) – Smart Backup and World Backup Day

    February 25, 2016 Webinar (11AM PT) – Migrating to Hyper-V including from Vmware

    February 24, 2016 Webinar (11AM ET) – How To Become a Data Protection Hero

    February 23, 2016 Webinar (11AM PT) – Rethinking Data Protection

    January 19, 2016 Webinar (9AM PT) – Solve Virtualization Performance Issues Like a Pro

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links,
    objectstoragecenter.com, storageioblog.com/data-protection-diaries-main/,
    thenvmeplace.com, thessdplace.com and storageio.com/performance among others.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II – EMC DSSD D5 Direct Attached Shared AFA

    Part II – EMC DSSD D5 Direct Attached Shared AFA

    server storage I/O trends

    This is the second post in a two-part series on the EMC DSSD D5 announcement, you can read part one here.

    Let’s take a closer look at how the EMC DSSD D5 works, its hardware and software components, how it compares, and other considerations.

    How Does DSSD D5 Work

    Up to 48 Linux servers attach via dual-port PCIe Gen 3 x8 cards that are stateless. Stateless simply means they do not have any flash and are not being used as storage cards; rather, they are essentially just NVMe adapter cards. With the first release, block, HDFS file, object and other APIs are available for Linux systems. These drivers enable the shared NVMe storage to be accessed by applications using different streamlined server and storage I/O driver software stacks to cut latency. DSSD D5 is meant to be a rack-scale solution, so distance is measured as inside a rack (e.g. a couple of meters).

    The 5U tall DSSD D5 supports 48 servers via a pair of I/O Modules (IOM), each with 48 ports, that in turn attach to the data plane and on to the Flash Modules (FM). Also attached to the data plane are a pair of controllers that are active/active for performing management tasks; however, they do not sit in the data path. This means that host clients directly access the FMs without having to go through a controller, which is the case in traditional storage systems and AFAs. The controllers only get involved when there is some setup, configuration or other management activity; otherwise they stay out of the way, kind of like how management should function: there when you need them to help, then out of the way so productive work can be done.

    EMC DSSD shared ssd das
    Pardon the following hand drawn sketches, you can see some nice pretty diagrams, videos and other content via the EMC Pulse Blog as well as elsewhere.

    Note that the host client servers take on the responsibility for managing and coordinating data consistency, meaning data can be shared between servers assuming applicable software is used for implementing integrity. This means that clustering and other software that can support shared storage are able to support low latency, high performance read and write activity to the DSSD D5, as opposed to relying on the underlying storage system for handling the shared storage coordination such as in a NAS. Also note that the DSSD D5 is optimized for concurrent multi-threaded and asynchronous I/O operations, along with atomic writes for data integrity, that enable the multiple cores in today’s faster processors to be more effectively leveraged.

    The data plane is a mesh, switch or expander based back plane enabling any of the north-bound (host client-server) 96 (2 x 48) PCIe Gen 3 x4 ports to reach the up to 36 (or as few as 18) FMs, which are also dual-pathed. Note that the host client-server PCIe dual-port cards are Gen 3 x8, while the DSSD D5 ports are Gen 3 x4. Simple math should tell you that if you are going to have 2 x PCIe Gen 3 x4 ports running at full speed, you want a Gen 3 x8 connection inside the server to get full performance.

    Think of the data plane similar to how a SAS expander works in an enclosure or a SAS switch, the difference being it is PCIe and not SAS or other protocol. Note that even though the terms mesh, fabric, switch, network are used, these are NOT attached to traditional LAN, SAN, NAS or other networks. Instead, this is a private “networked back plane” between the server and storage devices (e.g. FM).
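That x4 vs. x8 "simple math" is easy to make concrete. A back-of-envelope sketch, assuming PCIe Gen 3 signaling (8 GT/s per lane with 128b/130b encoding, so roughly 0.985 GB/sec usable per lane):

```python
# Back-of-envelope check of the x4 vs x8 math above. Assumes PCIe Gen 3:
# 8 GT/s per lane with 128b/130b encoding, ~0.985 GB/sec usable per lane.

GEN3_GB_S_PER_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/sec per lane

def pcie_gb_s(lanes):
    """Approximate usable PCIe Gen 3 bandwidth for a given lane count."""
    return lanes * GEN3_GB_S_PER_LANE

port_x4 = pcie_gb_s(4)   # one DSSD D5 port (Gen 3 x4)
card_x8 = pcie_gb_s(8)   # dual-port host adapter (Gen 3 x8)

# Two x4 ports at full speed exactly fill the x8 host connection
print(round(port_x4, 2), round(card_x8, 2))  # 3.94 7.88
```

In other words, two x4 ports saturate at roughly 7.9 GB/sec combined, which is precisely what the x8 host card provides; a narrower host slot would bottleneck the pair.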

    EMC DSSD D5 details

    The dual controllers (e.g. control plane) oversee flash management, including garbage collection among other tasks; storage is also thin provisioned.

    Dual controllers (active/active) are connected to each other (e.g. the control plane) as well as to the data path; however, they do not sit in the data path. Thus this is a fast-path, control-path approach, meaning the controllers can get involved to do management functions when needed, and get out of the way of work when not needed. The controllers are hot-swappable and provide global management functions including setting up and tearing down host client/server I/O paths, mappings and affinities. Controllers also support management of the CUBIC RAID data protection functions performed by the Flash Modules (FM).

    Other functions the controllers implement, leveraging their CPUs and DRAM, include flash translation layer (FTL) functions normally handled by SSD cards, drives or other devices. These FTL functions include wear-leveling for durability, garbage collection and voltage power management, among other tasks. The result is that the flash modules are able to spend more of their time and resources handling I/O operations vs. handling management tasks, compared to traditional off-the-shelf SSD drives, cards or devices.

    The FMs insert from the front and come in two sizes of 2TB and 4TB of raw NAND capacity. What’s different about the FMs vs. some other vendors’ approaches is that these are not your traditional PCIe flash cards; instead they are custom cards with a proprietary ASIC and raw NAND dies. DRAM is used in the FM as a buffer to hold data for write optimization as well as to enhance wear-leveling to increase flash endurance.

    The result is up to thousands of NAND dies spread over up to 36 FMs and, more important, more performance derived out of those resources. The increased performance comes from DSSD implementing its own flash translation layer, garbage collection and power voltage management, among other techniques, to derive more useful work per watt of energy consumed.

    EMC DSSD performance claims:

    • 100 microsecond latency for small IOs
    • 100GB/sec bandwidth for large IOs
    • 10 Million small IO IOPs
    • Up to 144TB raw capacity

    How Does It Compare To Other AFA and SSD solutions

    There will be many apples-to-oranges comparisons, as is often the case with new technologies, at least until others arrive in the market.

    Some general comparisons that may be apples to oranges as opposed to apples to apples include:

    • Shared and dense fast NAND flash (eMLC) SSD storage
    • Disaggregates flash SSD storage from servers while enabling high performance, low latency
    • Eliminates pools or ponds of dedicated SSD storage capacity and performance
    • Not a SAN, yet more than server-side flash or a flash SSD JBOD
    • Underlying Flash Translation Layer (FTL) is disaggregated from the SSD devices
    • Optimized hardware and software data path
    • Requires a special server-side stateless adapter for accessing the shared storage

    Some other comparisons include:

    • Hybrid and AFA storage shared via some server storage I/O network (good sharing, feature rich, resilient; slower performance and higher latency due to the hardware, network and server I/O software stacks). For example EMC VMAX, VNX and XtremIO among others.
    • Server-attached flash SSD aka server SAN (flash SSD creates islands of technology, lower resource sharing, data shuffling between servers, limited or no data services, management complexity). For example PCIe flash SSD stateful (persistent) cards where data is stored, or used as a cache, along with associated management tools and drivers.
    • DSSD D5 is a rack-scale hybrid approach combining direct attached shared flash with lower latency and higher performance vs. a traditional AFA or hybrid storage array, and better resource usage, sharing, management and performance vs. traditional dedicated server flash. It complements server-side data infrastructure and application scale-out software. Server applications can reach NVMe storage via user space with block, HDFS, Flood and other APIs.

    Using EMC DSSD D5 in possible hybrid ways

    What Happened to Server PCIe cards and Server SANs

    If you recall, a few years ago the industry rage was flash SSD PCIe server cards from vendors such as EMC, FusionIO (now part of SanDisk), Intel (still Intel), LSI (now part of Seagate), Micron (still Micron) and STEC (now part of Western Digital) among others. Server-side flash SSD PCIe cards are still popular, particularly the newer NVMe controller based models that use the NVMe protocol stack instead of AHCI/SATA or others.

    However, as is often the case, things evolve, and while there is still a place for server-side stateful PCIe flash cards either for data or as cache, there is also a need to combine and simplify management, as well as streamline the software I/O stacks, which is where EMC DSSD D5 comes into play. It enables consolidation of server-side SSD cards into a shared 5U chassis, giving up to 48 dual-pathed servers access to the flash pools while using streamlined server software stacks and drivers that leverage NVMe over PCIe.

    Where to learn more

    Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com

    What this all means

    EMC with DSSD D5 now has another solution to offer clients; granted, their challenge, as it has been over the past couple of decades, will be to educate and compensate their sales force and partners on which technology solution to put forward for different needs.

    On one hand, life could be simpler for EMC if they had only one platform solution that would then be the answer to every problem, something some other vendors and startups face. Likewise, if all you have is one solution, you can try to make that solution fit different environments, or get the environment to adapt to the solution. Having options is a good thing if those options can remove complexity along with cost while boosting productivity.

    I would like to see support for other operating systems such as Windows, particularly with the future Windows Server 2016 based Nano, as well as hypervisors including VMware and Hyper-V among others. On the other hand, I also would like to see a Sharp Aquos Quattron 80" 1080p 240Hz 3D TV on my wall to watch HD videos from my DJI Phantom drone. For now, focusing on Linux makes sense, however it would be nice to see some more platforms supported.

    Keep an eye on the NVMe space as we are seeing NVMe solutions appearing inside servers and storage systems, external dedicated and shared, as well as some other emerging things including NVMe over Fabrics. Learn more about EMC DSSD D5 here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I


    server storage I/O trends

    This is the first post in a two-part series pertaining to the EMC DSSD D5 announcement; you can read part two here.

    EMC today announced the general availability of their DSSD D5 shared direct attached SSD (DAS) flash storage system (e.g. an All Flash Array or AFA), which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014, which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB of raw flash SSD capacity with support for up to 48 dual-ported host client servers.

    Via EMC Pulse Blog

    What Is DSSD D5

    At a high level, EMC DSSD D5 is a PCIe direct attached SSD flash storage solution that enables aggregation of the disparate SSD card functionality typically found in separate servers into a shared system, without causing aggravation. DSSD D5 helps to alleviate server-side I/O bottlenecks, or the aggravation issues that can result from aggregation of workloads or data. Think of DSSD D5 as a shared application server storage I/O accelerator giving up to 48 servers access to up to 144TB of raw flash SSD to support various applications that have the need for speed.

    Applications that have the need for speed, or that can benefit from less time waiting for results where time is money, or from boosting productivity, enable high-profitability computing. This includes legacy as well as emerging applications and workloads spanning little data, big data, and big fast structured and unstructured data. From Oracle to SAS to HBase and Hadoop among others, perhaps even Alluxio.

    Some examples include:

    • Clusters and scale-out grids
    • High Performance Computing (HPC)
    • Parallel file systems
    • Forecasting and image processing
    • Fraud detection and prevention
    • Research and analytics
    • E-commerce and retail
    • Search and advertising
    • Legacy applications
    • Emerging applications
    • Structured database and key-value repositories
    • Unstructured file systems, HDFS and other data
    • Large undefined work sets
    • From batch stream to real-time
    • Reduces run times from days to hours

    Where to learn more

    Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com

    What this all means

    Today’s legacy and emerging applications have the need for speed, and where the applications may not need speed, the users, as well as the Internet of Things (IoT) devices that depend upon or feed those applications, do need things to move faster. Fast applications need fast software and hardware to get the same amount of work done faster with fewer wait delays, as well as to process larger amounts of structured and unstructured little data, big data and very fast big data.

    Different applications, along with the data infrastructures they rely upon including servers, storage, and I/O hardware and software, need to adapt to various environments; a one-size, one-approach model does not fit all scenarios. This means some applications and data infrastructures will benefit from shared direct attached SSD storage such as rack-scale solutions using EMC DSSD D5, while other applications will benefit from AFA or hybrid storage systems along with other approaches used in various ways.

    Continue reading part two of this series here including how EMC DSSD D5 works and more perspectives.

    Ok, nuff said (for now)

    Cheers
    Gs


    Various Hardware (SAS, SATA, NVM, M2) and Software (VHD) Defined Odds and Ends


    server storage I/O trends

    Ever need to add another GbE port to a small server, workstation or perhaps an Intel NUC, however no PCIe slots are available? How about attaching an M2 form factor flash SSD card to a server or device that does not have an M2 port, or mirroring two M2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following is a collection of odds-and-ends devices and tools for hardware and software defining your environment.

    Adding GbE Ports Without PCIe Ports

    Adding Ethernet ports or NICs is relatively easy with larger servers, assuming you have available PCIe slots.

    However, what about when you are limited on, or out of, PCIe slots? One option is to use a USB (preferably USB 3) to GbE connector. Another option, if you have an available mSATA card slot, such as on a server or workstation that had a WiFi card you no longer need, is to get an mSATA to GbE kit (shown below). Granted, you might have to get creative with the PCIe bracket depending on what you are going to put one of these into.

    mSATA to GbE and USB to GbE
    Left mSATA to GbE port, Right USB 3 (Blue) to GbE connector

    Tip: Some hypervisors may not like the USB to GbE, or may not have drivers for the mSATA to GbE connector; likewise some operating systems do not have in-the-box drivers. Start by loading GbE drivers such as those needed for RealTek NICs and you may end up with plug and play.

    SAS to SATA Interposer and M2 to SATA docking card

    In the following figure, on the left is a SAS to SATA interposer which enables a SAS HDD or SSD to connect to a SATA connector (power and data). Keep in mind that SATA devices can attach to SAS ports, however the usual rule of thumb is that SAS devices cannot attach to a SATA port or controller. To prevent that from occurring, the SAS and SATA connectors are notched differently so that a SAS device cannot plug into a SATA connector.

    Where the SAS to SATA interposers come into play is that some servers or systems have SAS controllers, however their drive bays have SATA power and data connectors. The key here is that there is a SAS controller, however instead of a SAS connector to the drive bay, a SATA connector is used. To get around this, interposers such as the one above allow the SAS device to attach to the SATA connector, which in turn attaches to the SAS controller.

    SAS SATA interposer and M2 to SATA docking card
    Left SAS to SATA interposer, Right M2 to SATA docking card

    In the above figure, on the right is an M2 NVM NAND flash SSD card attached to an M2 to SATA docking card. This enables M2 cards that have SATA protocol controllers (as opposed to M2 NVMe) to be attached to a SATA port on an adapter or RAID card. Some of these docking cards can also be mounted in server or storage system 2.5" (or larger) drive bays. You can find both of the above at Amazon.com as well as many other venues.

    P2V and Creating VHD and VHDX

    I like and use various Physical to Virtual (P2V), Virtual to Virtual (V2V), and even Virtual to Physical (V2P) along with Virtual to Cloud (V2C) tools, including those from VMware (vCenter Converter) and Microsoft (e.g. Microsoft Virtual Machine Converter) among others. Likewise Clonezilla, Acronis and many other tools are in the toolbox. One of those other tools that is handy for relatively quickly making a VHD or VHDX out of a running Windows server is disk2vhd.

    disk2vhd

    Now you should ask, why not just use the Microsoft Migration tool or VMware converter?

    Simple: if you use those or other tools and run into issues with GPT vs. MBR or BIOS vs. UEFI settings among others, disk2vhd is a handy workaround. Simply install it, tell it where to create the VHD or VHDX (preferably on another device), start the creation, and when done, move the VHDX or VHD to where needed and go from there.

    Where do you get disk2vhd and how much does it cost?

    Get it here from the Microsoft Technet Windows Sysinternals page, and it's free.

    Where to learn more

    Continue reading about the above and other related topics with these links.

  • Server storage I/O Intel NUC nick knack notes – Second impressions
  • Some Windows Server Storage I/O related commands
  • Server Storage I/O Cables Connectors Chargers & other Geek Gifts
  • The NVM (Non Volatile Memory) and NVMe Place (Non Volatile Memory Express)
  • Nand flash SSD and NVM server storage I/O memory conversations
  • Cloud Storage for Camera Data?
  • Via @EmergencyMgtMag Cloud Storage for Camera Data?

  • Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures
  • Part II 2014 Server Storage I/O Geek Gift ideas

    What this all means

    While the above odds-and-ends tips, tricks, tools and technology may not be applicable for your production environment, perhaps they will be useful for your test or home lab environment needs. On the other hand, the above may not be practically useful for anything, yet simply entertaining; the rest is up to you as to whether there is any return on investment, or perhaps return on innovation, from using these or other odds-and-ends tips and tricks that might be outside of the traditional box, so to speak.

    Ok, nuff said (for now)

    Cheers
    Gs


    Big Files Lots of Little File Processing Benchmarking with Vdbench



    server storage data infrastructure i/o File Processing Benchmarking with Vdbench

    Updated 2/10/2018

    Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is either doing some form of file processing, or that you simply want to have some extra workload running in the background for whatever reason? An option is file processing benchmarking with Vdbench.

    I/O performance

    Getting Started


    Here’s a quick and relatively easy way to do it with Vdbench (free from Oracle). Granted, there are other tools, both free and for fee, that can do similar things, however we will leave those for another day and post. Here’s the con to this approach: there is no GUI like what you have available with some other tools. Here’s the pro to this approach: it's free, flexible, and limited only by your creativity, amount of storage space, server memory and I/O capacity.

    If you need a background on Vdbench and benchmarking, check out the series of related posts here (e.g. www.storageio.com/performance).

    Get and Install the Vdbench Bits and Bytes


    If you do not already have Vdbench installed, get a copy from the Oracle or SourceForge site (the latter now points to Oracle here).

    Vdbench is free; you simply sign up and accept the free license, then download the bits (it is a single, common distribution for all operating systems) as well as the documentation.

    Installation, particularly on Windows, is really easy: basically follow the instructions in the documentation by copying the contents of the download folder to a specified directory, set up any environment variables, and make sure that you have Java installed.

    Here is a hint and tip for Windows Servers, if you get an error message about counters, open a command prompt with Administrator rights, and type the command:

    $ lodctr /r


    The above command will reset your I/O counters. Note however that the command will also overwrite existing counters, so only use it if you have to.

    Likewise *nix install is also easy, copy the files, make sure to copy the applicable *nix shell script (they are in the download folder), and verify Java is installed and working.

    You can do a vdbench -t (windows) or ./vdbench -t (*nix) to verify that it is working.

    Vdbench File Processing

    There are many options with Vdbench as it has a very robust command and scripting language, including the ability to set up for-loops among other things. We are only going to touch the surface here using its file processing capabilities. Likewise, Vdbench can run from a single server accessing multiple storage systems or file systems, as well as from multiple servers to a single file system. For simplicity, we will stick with the basics in the following examples to exercise a local file system. The number of files and the file sizes are limited by server memory and storage space.

    You can specify the number and depth of directories to put files into for processing. One of the parameters is the anchor point for the file processing; in the following examples S:\SIOTEMP\FS1 is used as the anchor point. Other parameters include the I/O size, percent reads, number of threads, run time and sample interval, as well as the output folder name for the result files. Note that unlike some tools, Vdbench does not create a single file of results, rather a folder with several files including summary, totals, parameters, histograms and CSV among others.


    Simple Vdbench File Processing Commands

    For flexibility and ease of use I put the following three Vdbench commands into a simple text file that is then called with parameters on the command line.
    fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

    fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

    Simple Vdbench script

    # SIO_vdbench_filesystest.txt
    #
    # Example Vdbench script for file processing
    #
    # fanchor = file system place where directories and files will be created
    # dirwid = how wide should the directories be (e.g. how many directories wide)
    # numfiles = how many files per directory
    # filesize = size in k, m, g e.g. 16k = 16KBytes
    # fxfersize = file I/O transfer size in kbytes
    # thrds = how many threads or workers
    # etime = how long to run in minutes (m) or hours (h)
    # itime = interval sample time e.g. 30 seconds
    # dirdep = how deep the directory tree
    # filrdpct = percent of reads e.g. 90 = 90 percent reads
    # -p processnumber = optionally specify a process number, only needed if running multiple vdbench instances at the same time; the number should be unique
    # -o = output folder that describes what is being done and some config info
    #
    # Sample command line shown for Windows, for *nix add ./
    #
    # The real Vdbench script with command line parameters indicated by !
    #

    fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

    fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

    Big Files Processing Script


    With the above script file defined, for Big Files I specify a command line such as the following.
    $ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTemp\FS1 dirwid=1 numfiles=60 filesize=5G fxfersize=128k thrds=64 etime=10h itime=30 numdir=1 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_5Gx60_BigFiles_64TH_STX1200_020116
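Before kicking off a long run, it is worth sanity-checking that the data set those parameters describe will fit on the anchor volume. A minimal sketch, assuming (per the script comments above) that numfiles is the file count per directory:

```python
# Rough capacity estimate for the big-file run above.
# Assumes numfiles is per directory, per the script comments; verify against
# the Vdbench documentation for your version before relying on this.
dirwid = 1          # directories wide
numfiles = 60       # files per directory
filesize_gb = 5     # 5G files

total_gb = dirwid * numfiles * filesize_gb
print(f"Approximate data set: {total_gb} GB")  # 300 GB
```

So the S: volume needs roughly 300GB free before the format phase starts, plus whatever headroom the file system wants.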

    Big Files Processing Example Results


    The following is one of the result files from the folder of results created via the above command for Big File processing showing totals.


    Run totals

    21:09:36.001 Starting RD=format_for_rd1

    Feb 01, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
    rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
    21:23:34.101 avg_2-28 2848.2 2.70 8.8 8.32 0.0 0.0 0.00 2848.2 2.70 0.00 356.0 356.02 131071 0.0 0.00 0.0 0.00 0.1 109176 0.1 0.55 0.1 2006 0.0 0.00

    21:23:35.009 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

    07:23:35.000 avg_2-1200 4939.5 1.62 18.5 17.3 90.0 4445.8 1.79 493.7 0.07 555.7 61.72 617.44 131071 0.0 0.00 0.0 0.00 0.0 0.00 0.1 0.03 0.1 2.95 0.0 0.00


    Lots of Little Files Processing Script


    For lots of little files, the following is used.


    $ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTEMP\FS1 dirwid=64 numfiles=25600 filesize=16k fxfersize=1k thrds=64 etime=10h itime=30 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_SmallFiles_64TH_STX1200_020116

    Lots of Little Files Processing Example Results


    The following is one of the result files from the folder of results created via the above command for lots of little files processing, showing totals.
    Run totals

    09:17:38.001 Starting RD=format_for_rd1

    Feb 02, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
    rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
    09:19:48.016 avg_2-5 10138 0.14 75.7 64.6 0.0 0.0 0.00 10138 0.14 0.00 158.4 158.42 16384 0.0 0.00 0.0 0.00 10138 0.65 10138 0.43 10138 0.05 0.0 0.00

    09:19:49.000 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

    19:19:49.001 avg_2-1200 113049 0.41 67.0 55.0 90.0 101747 0.19 11302 2.42 99.36 11.04 110.40 1023 0.0 0.00 0.0 0.00 0.0 0.00 7065 0.85 7065 1.60 0.0 0.00
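Because Vdbench writes its results as plain text, the totals are easy to post-process with a small script. The following is a rough sketch (my own helper, not part of Vdbench) that pulls the overall rate and response-time columns from the `avg_` lines like those shown above; the column positions are assumed from the sample output and may vary between Vdbench versions.

```python
def parse_avg_totals(text):
    """Extract overall rate and resp from Vdbench totals lines labeled avg_N-M.

    Assumes the layout in the sample output above: timestamp, avg_ label,
    then rate and resp as the next two numeric columns.
    """
    rows = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1].startswith("avg_"):
            rows.append({"label": fields[1],
                         "rate": float(fields[2]),
                         "resp": float(fields[3])})
    return rows

sample = "07:23:35.000 avg_2-1200 4939.5 1.62 18.5 17.3 90.0"
for row in parse_avg_totals(sample):
    print(row["label"], row["rate"], row["resp"])  # avg_2-1200 4939.5 1.62
```

The same loop can be pointed at the totals file from each run folder to build a quick comparison table across runs.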


    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    The above examples can easily be modified to do different things, particularly if you read the Vdbench documentation on how to set up multi-host, multi-storage-system, multiple job streams to do different types of processing. This means you can benchmark a storage system, server, or converged and hyper-converged platform, or simply put a workload on it as part of other testing. There are even options for handling data footprint reduction such as compression and dedupe.

    Ok, nuff said, for now.

    Gs

    Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Server StorageIO January 2016 Update Newsletter

    Volume 16, Issue I – beginning of Year (BoY) Edition

    Hello and welcome to the January 2016 Server StorageIO update newsletter.

    Is it just me, or did January disappear in a flash like data stored in non-persistent volatile DRAM memory when the power is turned off? It seems like just the other day that it was the first day of the new year and now we are about to welcome in February. Needless to say, like many of you I have been busy with various projects, many of which are behind the scenes, some of which will start appearing publicly sooner while others later.

    In terms of what I have been working on, it includes the usual performance, availability, capacity and economics (e.g. PACE) topics related to servers, storage, I/O networks, hardware, software, cloud, virtual and containers. This includes NVM as well as NVMe based SSDs, HDDs, cache and tiering technologies, as well as data protection among other things, with Hyper-V, VMware and various cloud services.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in-the-news commentary appearing soon.

    Cheers GS

    In This Issue

  • Feature Topic
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcasts
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links

    Feature Topic – Microsoft Nano, Server 2016 TP4 and VMware

    This month's feature topic is virtual servers and software defined storage, including those from VMware and Microsoft. Back in November I mentioned Windows Server 2016 Technical Preview 4 (e.g. TP4) along with Storage Spaces Direct and Nano. As a reminder, you can download your free trial copy of Windows Server 2016 TP4 from this Microsoft site here.

    Three good Microsoft Blog posts about storage spaces to check out include:

    • Storage Spaces Direct in Technical Preview 4 (here)
    • Hardware options for evaluating Storage Spaces Direct in Technical Preview 4 (here)
    • Storage Spaces Direct – Under the hood with the Software Storage Bus (here)

    As for Microsoft Nano, for those not familiar, it’s not a new tablet or mobile device; instead, it is a very lightweight, streamlined version of Windows Server 2016. How streamlined? Much more so than the earlier Windows Server versions that simply disabled the GUI and desktop interfaces. Nano is smaller from a memory and disk storage space perspective, meaning it uses less RAM, boots faster, and has fewer moving parts (e.g. software modules) to break (or need patching).

    Specifically, Nano removes 32-bit support and anything related to the desktop and GUI interfaces, as well as the console interface. That’s right, no console or virtual console to log into; WOW64 is gone, and access is via PowerShell or Windows management tools from remote systems. How small is it? I have a Nano instance built on a VHDX that is under a GB in size; granted, it's only for testing. The goal of Nano is a very lightweight, streamlined version of Windows Server that can run hundreds (or more) of VMs in a small memory footprint, not to mention support lots of containers. Nano is part of Windows Server 2016 TP4; learn more about Nano here in this Microsoft post, including how to get started using it.

    Speaking of VMware, if you have not received an invite yet to their Digital Enterprise February 6, 2016 announcement event, click here to register.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

    • EMC announced Elastic Cloud Storage (ECS) V2.2. A main theme of V2.2 is that besides being the 3rd generation of EMC object storage (dating back to Centera, then Atmos), ECS is also where the functionality of Centera, Atmos and others converges. ECS provides object storage access along with HDFS (Hadoop and Hortonworks certified) and traditional NFS file access.

      Object storage access includes Amazon S3, OpenStack Swift, Atmos and CAS (Centera). In addition to the access protocols, Centera functionality for regulatory compliance has been folded into the ECS software stack. For example, ECS is now compatible with SEC 17 a-4(f) and CFTC 1.3(b)-(c) regulations protecting data from being overwritten or erased for a specified retention period. Other enhancements besides scalability, resiliency and ease of use include metadata and search capabilities. You can download and try ECS for non-production workloads with no capacity or functionality limitations from EMC here.

    View other recent news and industry trends here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities, tips, and announcements. In case you missed them from last month:

    • TheFibreChannel.com: Industry Analyst Interview: Greg Schulz, StorageIO
    • EnterpriseStorageForum: Comments Handling Virtual Storage Challenges
    • PowerMore (Dell): Q&A: When to implement ultra-dense storage

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know of or have heard about recently.

    • Datrium – DVX and NetShelf server software defined flash storage and converged infrastructure
    • DataDynamics – StorageX is a software solution for enabling intelligent data migration, including from NetApp ONTAP 7-Mode to clustered ONTAP, as well as to and from EMC among other NAS file serving solutions.
    • Paxata – Little and Big Data management solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good

    And in case you missed them from last month:

    • IronMountain:  5 Noteworthy Data Privacy Trends From 2015
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future
    • InfoStor:  Water, Data and Storage Analogy

    Check out these resources and links covering technology, techniques, trends, and tools. View more tips and articles here

    StorageIO Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    NAB (Las Vegas) April 19-20, 2016

    TBA – March 31, 2016

    Redmond Magazine Gridstore (How to Migrate from VMware to Hyper-V) February 25, 2016 Webinar (11AM PT)

    TBA – February 23, 2016

    Redmond Magazine and Dell Foglight – Manage and Solve Virtualization Performance Issues Like a Pro (Webinar 9AM PT) – January 19, 2016

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    Quick Look: What’s the Best Enterprise HDD for a Content Server?

    Insight for Effective Server Storage I/O decision-making
    This StorageIO® Industry Trends Perspectives Solution Brief and Lab Review (compliments of Seagate and Servers Direct) looks at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate (www.seagate.com) Enterprise Hard Disk Drives (HDDs).

    I was given the opportunity to do some hands-on testing, running different application workloads on a 2U content solution platform to see how various Seagate Enterprise 2.5” HDDs handle those workloads. This includes Seagate’s Enterprise Performance HDDs with the enhanced caching feature.

    Read more in this Server StorageIO industry Trends Perspective white paper and lab review.

    Looking for NVM including SSD information? Visit the Server StorageIO www.thessdplace.com and www.thenvmeplace.com micro sites. View other StorageIO lab review and test drive reports here.

    Server StorageIO Recommended Reading List

    The following are various recommended readings including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will find a couple of mine as well as books from others. This month’s recommended reading is a blog site: if you have not visited Duncan Epping’s (@DuncanYB) Yellow-Bricks site, you should, particularly if you are interested in virtualization, high availability and related topics.

    Seven Databases in Seven Weeks, a guide to NoSQL, via Amazon.com

    Granted, Duncan, being a member of the VMware CTO office, covers a lot of VMware-related themes; however, as the author of several books, he also covers non-VMware topics. Duncan recently did a really good and simple post about rebuilding a failed disk in a VMware VSAN vs. in a legacy RAID or erasure-code based storage solution.

    One of the things that struck me as important in what Duncan wrote is avoiding apples-to-oranges comparisons. What I mean by this is that it is easy to compare traditional parity- or mirror-based solutions that chunk or shard data on a KByte basis spread over disks vs. data that is chunked or sharded on a GByte (or larger) basis over multiple servers and their disks. Anyway, check out Duncan’s site and recent post by clicking here.
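    The rebuild difference can be made concrete with some back-of-the-envelope arithmetic. This sketch is illustrative only (it ignores contention, throttling and scheduling, and the throughput and disk-count numbers are hypothetical): a legacy RAID rebuild is bottlenecked by writing to a single spare disk, while a distributed layout re-creates the failed disk's chunks in parallel across many disks and servers.

    ```python
    def rebuild_hours(capacity_tb: float, per_disk_mbps: float, parallel_disks: int) -> float:
        """Estimate hours to reconstruct one failed disk's worth of data.

        capacity_tb    - data to rebuild, in decimal TB
        per_disk_mbps  - sustained rebuild throughput per participating disk, MB/s
        parallel_disks - how many disks share the rebuild work
        """
        total_mb = capacity_tb * 1_000_000          # decimal TB -> MB
        aggregate_mbps = per_disk_mbps * parallel_disks
        return total_mb / aggregate_mbps / 3600     # seconds -> hours

    # Hypothetical numbers: a 4 TB disk at 100 MB/s sustained rebuild throughput.
    legacy = rebuild_hours(4, 100, parallel_disks=1)         # one spare disk does all the writing
    distributed = rebuild_hours(4, 100, parallel_disks=20)   # 20 disks share the work
    print(f"legacy RAID: {legacy:.1f} h, distributed: {distributed:.1f} h")
    ```

    With these assumed numbers the single-spare rebuild takes roughly 11 hours while the 20-way distributed rebuild finishes in well under one, which is exactly the kind of comparison that goes wrong when the two layouts are treated as equivalent.
    
    
    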

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/performance
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved