Dell EMC VMware September 2017 Software Defined Data Infrastructure Updates

server storage I/O data infrastructure trends

vmworld 2017

September was a busy month including VMworld in Las Vegas that featured many Dell EMC VMware (among other) software defined data infrastructure updates and announcements.

A summary of September VMware (and partner) related announcements include:

VMware and AWS via Amazon Web Services

VMware and AWS

Some of you might recall VMware's earlier attempt at public cloud with the vCloud Air service (see the Server StorageIO lab test drive here), which has since been deprecated (i.e., retired). This new approach by VMware leverages the large global presence of AWS, enabling customers to set up public or hybrid vSphere, vSAN and NSX based clouds, as well as software defined data centers (SDDC) and software defined data infrastructures (SDDI).

VMware Cloud on AWS runs on dedicated, single-tenant hosts (unlike multi-tenant Elastic Compute Cloud (EC2) instances or VMs) and supports from 4 to 16 underlying hosts per cluster. Unlike EC2 virtual machine instances, VMware Cloud on AWS is delivered on elastic bare metal (e.g. dedicated private servers, aka DPS). Note that while AWS EC2 is the most commonly known option, AWS also has other choices for server compute, including Lambda serverless microservices as well as Lightsail virtual private servers (VPS).

Besides servers with storage-optimized I/O featuring low-latency NVMe-accessed SSDs, and applicable underlying server I/O networking, VMware Cloud on AWS runs the VMware software stack directly on the underlying host servers (e.g. there is no virtualization nesting taking place). This means you should expect robust performance similar to your on-premises VMware environment. VM workloads can move between your onsite VMware systems and VMware Cloud on AWS using various tools. VMware Cloud on AWS is delivered and managed by VMware, including pricing. Learn more about VMware Cloud on AWS here, and here (VMware PDF) and here (VMware Hands On Lab aka HOL).

Read more about AWS September news and related updates here in this StorageIOblog post.

VMware and Pivotal PKS via VMware.com

Pivotal Container Service (PKS) and Google Kubernetes Partnership

During VMworld, VMware, Pivotal and Google announced a partnership for enabling Kubernetes container management called PKS (Pivotal Container Service). Kubernetes is evolving as a popular open source container orchestration platform, with roots at Google, for managing microservices and serverless workloads. What this means is that what is good for Google and others for managing containers is now good for VMware and Pivotal. In related news, VMware has become a platinum sponsor of the Cloud Native Computing Foundation (CNCF). If you are not familiar with CNCF, add it to your vocabulary and learn more at www.cncf.io.

Other VMworld and September VMware related announcements

Hyper-converged data infrastructure provider Maxta announced a VMware vSphere Escape Pod (parachute not included ;) ) to facilitate migration from ESXi based environments to Red Hat Linux hypervisor environments. There were also an IBM and VMware cloud partnership, along with Dell EMC, IBM and VMware joint cloud solutions, plus white listing of VMware vSphere VMs for enhanced security that combines with earlier announced capabilities.

Note that both VMware with vSphere ESXi and Microsoft with Hyper-V (Windows and Azure based) are supporting various approaches for securing virtual machines (VMs) and the hosts they run on. These enhancements move beyond simply encrypting the VMDK or VHDX virtual disks the VMs reside in or use, as well as beyond password, ssh and other security measures. For example, Microsoft is adding support for guarded fabrics (and guarded hosts) as well as shielded VMs. Keep an eye on how both VMware and Microsoft extend the data protection and security capabilities of their software defined data infrastructure solutions and services.

Dell EMC Announcements

At VMworld in September Dell EMC announcements included:

  • Hyper Converged Infrastructure (HCI) and Hybrid Cloud enhancements
  • Data Protection, Governance and Management suite updates
  • XtremIO X2 all flash array (AFA) availability optimized for vSphere and VDI

HCI and Hybrid Cloud enhancements include VxRail Appliance and VxRack SDDC (vSphere 6.5, vSAN 6.6, NSX 6.3), hybrid cloud platforms (Enterprise Hybrid Cloud and Native Hybrid Cloud), vSAN Ready Nodes (vSAN 6.6 and encryption), and VMware Ready System. Note that Dell EMC, in addition to supporting VMware hybrid clouds, also previously announced solutions for Microsoft Azure Stack back in May.

Software Defined Data Infrastructure Essentials at VMworld Bookstore

Software Defined Data Infrastructure Essentials (CRC Press) at VMworld bookstore

My new book Software Defined Data Infrastructure Essentials (CRC Press) made its public debut at the VMworld bookstore, where I did a book signing event. You can get your copy of Software Defined Data Infrastructure Essentials, which covers Software Defined Data Centers (SDDC) along with hybrid, multi-cloud, serverless, converged and related topics, at Amazon among other venues. Learn more here.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

What This All Means

A year ago at VMworld the initial conversations started around what would become the VMware Cloud on AWS solution. Also a year ago, besides VMware Integrated Containers (VIC) and some other pieces, the overall container story, and in particular the related management story, was a bit cloudy (pun intended). However, now the fog and cloud seem to be clearing with the PKS solution, along with details of VMware Cloud on AWS. Likewise vSphere, vSAN and NSX along with associated vRealize tools continue to evolve, with customer deployments growing. All in all, VMware continues to evolve; let's see how things progress over the year until the next VMworld.

By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.

Ok, nuff said, for now.
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Top vBlog 2017 Voting Now Open

server storage I/O trends

It is that time of the year again when Eric Siebert (@ericsiebert) over at vSphere-land holds his annual Top vBlog (e.g. VMware and Virtualization related) voting (vote here until June 30, 2017). The annual Top vBlog event enables fans to vote for their favorite blogs (to get them into the top 10, 25, 50 and 100) as well as rank them for different categories which appear on Eric’s vLaunchPad site.

This year's Top vBlog voting is sponsored by Turbonomic (formerly known as VMTurbo) who, if you are not aware, have some interesting technology for cross-platform (cloud, container, virtualization, hardware, software, services) data infrastructure management software tools.

Software Defined Data Infrastructure Management

The blogs and sites listed on Eric's site share a common linkage to virtualization and in particular tend to be more VMware focused; however, some are also hybrid or agnostic, spanning other technologies, vendors, services and tools. Some examples of the different focus areas include hypervisors, VDI, cloud, containers, management tools, scripting, networking, servers, storage, and data protection (including backup/restore, replication, BC and DR) among others.

In addition to the main list of blogs (that are active), there are also sub lists for different categories including:

  • Top 100 (Also top 10, 25, 50) vBlogs
  • Archive of retired (i.e., not active or seldom posting) blogs
  • News and Information sites
  • Podcasts
  • Scripting Blogs
  • Storage related
  • Various Virtualization Blogs
  • VMware Corporate Blogs

What To Do

Get out and vote for your favorites (or the blogs you frequent) in appreciation of those who create virtualization, VMware and data infrastructure related content. Click here or on the image above to reach the voting survey site where you will find more information and rules. In summary, select 12 of your favorite or preferred blogs, then rank them from 1 (most favorite) to 12. Then select your favorites for other categories such as Female Blog, Independent, New Blog, News websites, Podcast, Scripting and Storage among others.

Note: You will find my StorageIOblog in the main category (e.g. where you select 12 and then rank), as well as in the Storage, Independent, as well as Podcast categories, and thank you in advance for your continued support.

Which Blogs Do I Recommend (Among Others)

Two of my favorite blogs (and authors) are not included, as Duncan Epping (Yellow Bricks), former #1, and Frank Denneman, former #4, chose not to take part this year, opening the door for some others to move up into the top 10 (or 25, 50 and 100). Of those listed, some of the blogs I find valuable include Cormac Hogan of VMware, Demitasse (Alastair Cooke), ESX Virtualization (Vladan Seget), Kendrick Coleman, NTPro.nl (Eric Sloof), Planet VM (Tom Howarth), Virtually Ghetto (William Lam), VM Blog (David Marshall), vsphere-land.com (Eric Siebert) and Wahl Network (Chris Wahl) among others.

Where to learn more

What this all means

It's that time of the year again to take a few moments and show some appreciation for your favorite or preferred blogs, along with their authors who spend time creating content for those sites. Also check out Turbonomic, as they have interesting technology that I have kept an eye on for some time now, and so should you. Thank you all in advance, whether or not you take part in the voting, as I also appreciate your continued support in viewing these posts either at the StorageIOblog.com site or one of the many downstream sites where you can also read the content.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).


May 2017 Server StorageIO Data Infrastructures Update Newsletter

Volume 17, Issue V

Hello and welcome to the May 2017 issue of the Server StorageIO update newsletter.

Summer is officially still a few weeks away here in the northern hemisphere; however, for all practical purposes it has arrived. What this means is that in addition to normal workplace activities and projects, there are plenty of outdoor things (as well as distractions) to attend to.

Over the past several months I have mentioned a new book that is due out this summer, which means it's getting close to announcement time. The new book title is Software Defined Data Infrastructure Essentials – Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press/Taylor & Francis/Auerbach), which you can learn more about here (with more details being added soon). A common question is whether there will be electronic versions of the book, and the answer is yes (more on this in a future newsletter).

Data Infrastructures

Another common question is what the book is about: what is a data infrastructure (see this post) and what is tradecraft (see this post)?

Software-Defined Data Infrastructure Essentials provides fundamental coverage of physical, cloud, converged, and virtual server storage I/O networking technologies, trends, tools, techniques, and tradecraft skills. From webscale, software-defined, containers, database, key-value store, cloud, and enterprise to small or medium-size business, the book is filled with techniques and tips to help develop or refine your server storage I/O hardware, software, and services skills. Whether you are new to data infrastructures or a seasoned pro, you will find this comprehensive reference indispensable for gaining as well as expanding experience with technologies, tools, techniques, and trends.

Software-Defined Data Infrastructure Essentials SDDI SDDC
ISBN-13: 978-1498738156
ISBN-10: 149873815X
Hardcover: 672 pages
Publisher: Auerbach Publications; 1st edition (June 2017)
Language: English

Watch for more news and insight about my new book Software-Defined Data Infrastructure Essentials soon. In the meantime, check out the various items below in this edition of the Server StorageIO Update.

In This Issue

Enjoy this edition of the Server StorageIO update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Some recent Industry Activities, Trends, News and Announcements include:

Flackbox.com has some new independent (non NetApp produced) learning resources, including a NetApp simulator eBook and a MetroCluster tutorial. Over in the Microsoft world, Thomas Maurer has a good piece about Windows Server build 2017 and all about containers. Microsoft also announced that SQL Server 2017 CTP 2.1 is now available. Meanwhile, here are some of my experiences and thoughts from test driving Microsoft Azure Stack.

Speaking of NetApp, among other announcements they released a new version of their StorageGRID object storage software. NVMe activity in the industry (and at customer sites) continues to increase, with Cavium QLogic NVMe over Fabrics news along with Broadcom's recent NVMe RAID announcements. Keep in mind that if the answer is NVMe, then what are the questions?

Here is a good summary of the recent OpenStack Boston Summit. StorPool did a momentum announcement; for those of you into software defined storage, add StorPool to your watch list. On the VMware front, check out this vSAN 6.6 stretched cluster demo (video) via Yellow Bricks.

Check out other industry news, comments, trends perspectives here.

Server StorageIOblog Posts

Recent and popular Server StorageIOblog posts include:

View other recent as well as past StorageIOblog posts here

Server StorageIO Commentary in the news

Recent Server StorageIO industry trends perspectives commentary in the news.

Via EnterpriseStorageForum: What to Do with Legacy Assets in a Flash Storage World
There is still a place for hybrid arrays. A hybrid array is the home run when it comes to leveraging your existing non-flash, non-SSD based assets today.

Via EnterpriseStorageForum: Where All-Flash Storage Makes No Sense
A bit of flash in the right place can go a long way, and everybody can benefit from at least some flash somewhere. Some might say the more, the better. But where you have budget constraints that simply prevent you from having more flash for things such as cold, inactive, or seldom accessed data, you should explore other options.

Via Bitpipe: Changing With the Times – Protecting VMs (PDF)

Via FedTech: Storage Strategies: Agencies Optimize Data Centers by Focusing on Storage

Via SearchCloudStorage: Dell EMC cloud storage strategy needs to cut through fog

Via SearchStorage: Microsemi upgrades controllers based on HPE technology

Via EnterpriseStorageForum: 8 Data Machine Learning and AI Storage Tips

Via SiliconAngle: Dell EMC announces hybrid cloud platform for Azure Stack

View more Server, Storage and I/O trends and perspectives comments here

Events and Activities

Recent and upcoming event activities.

Sep. 13-15, 2017 – Fujifilm IT Executive Summit – Seattle WA

August 28-30, 2017 – VMworld – Las Vegas

July 22, 2017 – TBA

June 22, 2017 – Webinar – GDPR and Microsoft Environments

May 11, 2017 – Webinar – Email Archiving, Compliance and Ransomware

See more webinars and activities on the Server StorageIO Events page here.

Server StorageIO Industry Resources and Links

Useful links and pages:
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Ok, nuff said, for now.

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).


VMware vSAN 6.6 hyper-converged (HCI) software defined data infrastructure

server storage I/O trends

In case you missed it, VMware announced vSAN v6.6 hyper-converged infrastructure (HCI) software defined data infrastructure solution. This is the first of a five-part series about VMware vSAN V6.6. Part II (just the speeds feeds please) is located here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's virtual storage area network, a software-defined solution that is part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is HCI, combining compute (server), I/O networking and storage (space and I/O) along with hypervisors, management, and other tools.

Software-defined data infrastructure

Excuse Me, What is vSAN and who is it for

Some might find it odd having to explain what vSAN is; on the other hand, not everybody is dialed into the VMware ecosystem, so let's give them some help. Everybody else, feel free to jump ahead.

For those not familiar, VMware vSAN is an HCI software-defined storage solution that converges compute (hypervisors and servers) with storage space capacity and I/O performance along with networking. Being HCI means that with vSAN, as you scale compute, storage space capacity and I/O performance also increase in an aggregated fashion. Likewise, as you increase storage space capacity and server I/O performance, you also get more compute capabilities (along with memory).
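
That aggregated scaling can be sketched in a few lines (a generic illustration with hypothetical node specifications, not VMware sizing guidance): each node added to a cluster grows compute, memory, capacity and I/O capability together rather than independently.

```python
from dataclasses import dataclass

@dataclass
class Node:
    cores: int
    memory_gb: int
    capacity_tb: float
    iops: int

def cluster_totals(nodes):
    """Aggregate HCI resources: adding a node scales all dimensions together."""
    return Node(
        cores=sum(n.cores for n in nodes),
        memory_gb=sum(n.memory_gb for n in nodes),
        capacity_tb=sum(n.capacity_tb for n in nodes),
        iops=sum(n.iops for n in nodes),
    )

# Hypothetical identical nodes; a four-node cluster aggregates 4x of everything.
cluster = cluster_totals([Node(32, 256, 10.0, 100_000) for _ in range(4)])
```

The point of the sketch is the coupling: unlike a standalone array, you cannot grow capacity without also growing compute (and vice versa), which is both the convenience and the trade-off of HCI.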

For VMware-centric environments looking to go CI or HCI, vSAN offers a compelling value proposition, leveraging known VMware tools and staff skills (knowledge, experience, tradecraft). Another benefit of vSAN is the ability to select your hardware platform from different vendors, a trend that other CI/HCI vendors have started to offer as well.

CI and HCI data infrastructure

Keep in mind that fast applications need fast servers, I/O and storage, and that server storage I/O needs CPU along with memory to generate I/O operations (IOPs) or move data. What this all means is that HCI solutions such as VMware vSAN combine or converge the server compute, hypervisors, storage file system, storage devices, I/O and networking along with other functionality into an easy to deploy (and manage) turnkey solution.

Learn more about CI and HCI along with who some other vendors are as well as considerations at www.storageio.com/converge. Also, visit VMware sites to find out more about vSphere ESXi hypervisors, vSAN, NSX (Software Defined Networking), vCenter, vRealize along with other tools for enabling SDDC and SDDI.

Give Me the Quick Elevator Pitch Summary

VMware has enhanced vSAN with version 6.6 (V6.6), enabling new functionality and supporting new hardware platforms along with partners, while reducing costs and improving scalability and resiliency for SDDC and SDDI environments. This spans small medium business (SMB), mid-market and small medium enterprise (SME), as well as workgroup, departmental and Remote Office Branch Office (ROBO) environments.

Being an HCI solution, management functions of the server, storage, I/O, networking, hypervisor, hardware, and software are converged to improve management productivity. Also, vSAN integrates with VMware vSphere among other tools to enable a modern, robust data infrastructure that serves, protects, preserves, secures and stores data along with the associated applications.

Where to Learn More

The following are additional resources to learn more about vSAN and related technologies.

What this all means

Overall a good set of enhancements as vSAN continues its evolution, looking back from just a few years ago to where it is today and will be in the future. If you have not looked at vSAN recently, take some time beyond reading this piece to learn more.

Continue reading more about VMware vSAN 6.6 in part II (just the speeds feeds please) is located here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) located here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).


VMware vSAN V6.6 Part II (just the speeds feeds features please)

server storage I/O trends

In case you missed it, VMware announced vSAN v6.6 hyper-converged infrastructure (HCI) software defined data infrastructure solution. This is the second of a five-part series about VMware vSAN V6.6. View Part I here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's virtual storage area network, a software-defined solution that is part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is HCI, combining compute (server), I/O networking and storage (space and I/O) along with hypervisors, management, and other tools.

Just the Speeds and Feeds Please

For those who just want to see the list of what’s new with vSAN V6.6, here you go:

  • Native encryption for data-at-rest
  • Compliance certifications
  • Resilient management independent of vCenter
  • Degraded Disk Handling v2.0 (DDHv2)
  • Smart repairs and enhanced rebalancing
  • Intelligent rebuilds using partial repairs
  • Certified file service & data protection solutions
  • Stretched clusters with local failure protection
  • Site affinity for stretched clusters
  • 1-click witness change for Stretched Cluster
  • vSAN Management Pack for vRealize
  • Enhanced vSAN SDK and PowerCLI
  • Simple networking with Unicast
  • vSAN Cloud Analytics with real-time support notification and recommendations
  • vSAN ConfigAssist with 1-click hardware lifecycle management
  • Extended vSAN Health Services
  • vSAN Easy Install with 1-click fixes
  • Up to 50% greater IOPS for all-flash with optimized checksum and dedupe
  • Support for new next-gen workloads
  • vSAN for Photon in Photon Platform 1.1
  • Day 0 support for latest flash technologies
  • Expanded caching tier choice
  • Docker Volume Driver 1.1
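
A couple of the items above, optimized checksum and dedupe, lend themselves to a conceptual illustration. vSAN's actual implementation is proprietary; the following is a minimal, hypothetical sketch (fixed-size blocks, SHA-256 content hashes for dedupe, CRC32 for integrity) of how per-block checksums and deduplication can coexist:

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # hypothetical fixed block size

def store_blocks(data: bytes):
    """Split data into blocks, checksum each, and dedupe by content hash."""
    store = {}          # content hash -> block (deduplicated storage)
    manifest = []       # ordered (hash, crc) pairs for reassembly and verification
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()  # dedupe key
        crc = zlib.crc32(block)                     # integrity check on read
        store.setdefault(digest, block)             # identical blocks stored once
        manifest.append((digest, crc))
    return store, manifest

def read_blocks(store, manifest) -> bytes:
    """Reassemble data, verifying each block's checksum on read."""
    out = bytearray()
    for digest, crc in manifest:
        block = store[digest]
        if zlib.crc32(block) != crc:
            raise IOError("checksum mismatch: corrupted block")
        out += block
    return bytes(out)
```

Duplicate blocks consume space once while the manifest preserves order, and every read is verified against its stored checksum, trading a little CPU for integrity and capacity savings.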

What’s New and Value Proposition of vSAN 6.6

Let's take a closer look beyond the bullet list of what's new with vSAN 6.6, along with perspectives on how those features address different needs. The VMware vSAN proposition is to evolve and enable modernizing data infrastructures with HCI powered by vSphere along with vSAN.

Three main themes or characteristics (and benefits) of vSAN 6.6 include addressing (or enabling):

  • Reducing risk while scaling
  • Reducing cost and complexity
  • Scaling for today and tomorrow

VMware vSAN 6.6 summary
Image via VMware

Reducing risk while scaling

Reducing (or removing) risk while evolving your data infrastructure with HCI, including the flexibility of choosing among five supported hardware vendors. This includes native security, availability and resiliency enhancements (including intelligent rebuilds) without sacrificing storage efficiency (capacity), effectiveness (performance productivity), management or choice.

VMware vSAN DaRE
Image via VMware

Data at Rest Encryption (DaRE) of all vSAN data objects, enabled at the cluster level. The new functionality supports hybrid along with all-flash SSD as well as stretched clusters. The VMware vSAN DaRE implementation is an alternative to using self-encrypting drives (SEDs), reducing cost, complexity and management activity. All vSAN features, including data footprint reduction (DFR) features such as compression and deduplication, are supported. For security, vSAN DaRE integrates with compliance key management technologies, including those from SafeNet, HyTrust, Thales and Vormetric among others.

VMware vSAN management
Image via VMware

An ESXi HTML5-based host client, along with a CLI via ESXCLI, for administering vSAN clusters as an alternative in case your vCenter server(s) are offline. Management capabilities include monitoring of critical health and status details along with configuration changes.

VMware vSAN health management
Image via VMware

Health monitoring enhancements include handling of degraded vSAN devices, with intelligence that proactively detects impending device failures. As part of this functionality, if a replica of the failing (or possibly soon-to-fail) device exists, vSAN can take action to maintain data availability.

Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means

With each new release, vSAN is increasing its feature, functionality, resiliency and extensiveness associated with traditional storage and non-CI or HCI solutions. Continue reading more about VMware vSAN 6.6 in Part I here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the Spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).


VMware vSAN V6.6 Part III (reducing costs complexity)

server storage I/O trends

In case you missed it, VMware announced vSAN v6.6 hyper-converged infrastructure (HCI) software defined data infrastructure solution. This is the third of a five-part series about VMware vSAN V6.6. View Part I here, Part II (just the speeds feeds please) is located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's virtual storage area network, a software-defined solution that is part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is HCI, combining compute (server), I/O networking and storage (space and I/O) along with hypervisors, management, and other tools.

Reducing cost and complexity

Reducing your total cost of ownership (TCO) includes lowering capital expenditures (CapEx) and operating expenditures (OpEx); VMware claims a 50% TCO reduction across CapEx and OpEx. Keep in mind that solutions such as vSAN can also help drive return on investment (ROI) as well as return on innovation (the other ROI) via improved productivity, effectiveness and efficiencies (savings). Another aspect of addressing TCO and ROI is flexibility, leveraging stretched clusters to address HA, BC, BR and DR availability needs cost effectively. These enhancements include efficiency (and effectiveness, e.g. productivity) at scale, proactive cloud analytics, and intelligent operations.

VMware vSAN stretch cluster
Image via VMware

Low-cost (or cost-effective) local and remote resiliency and data protection with stretched clusters across sites. Upon a site failure, vSAN maintains availability by leveraging the surviving site's redundancy. For performance and productivity effectiveness, I/O traffic is kept local where possible and practical, reducing cross-site network workload. Bear in mind that the best I/O is the one you do not have to do, and the second best is the one with the least impact.

This means that if you can address I/Os as close to the application as possible (e.g. locality of reference), that is a better I/O. On the other hand, when data is not local, the best I/O is the one involving a local or remote site with the least overhead impact to applications, as well as to server storage I/O (including network) resources. Also keep in mind that with vSAN you can fine-tune availability, resiliency and data protection to meet various needs by adjusting the fault tolerance method (FTM) along with the number of failures to tolerate (FTT).
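
As a back-of-the-envelope illustration (not a substitute for VMware's sizing tools), the raw capacity implied by different failures-to-tolerate and fault tolerance method settings can be sketched, using the commonly documented overheads of mirroring (FTT + 1 full copies) and erasure coding (RAID-5 at roughly 1.33x, RAID-6 at 1.5x):

```python
def raw_capacity_needed(usable_gb: float, ftt: int, ftm: str = "mirroring") -> float:
    """Estimate raw capacity required for a given usable capacity.

    ftt: number of failures to tolerate.
    ftm: 'mirroring' (RAID-1) keeps ftt+1 full copies;
         'erasure' uses RAID-5 for ftt=1 (3 data + 1 parity)
         or RAID-6 for ftt=2 (4 data + 2 parity).
    """
    if ftm == "mirroring":
        return usable_gb * (ftt + 1)
    if ftm == "erasure":
        if ftt == 1:
            return usable_gb * 4 / 3   # RAID-5 overhead ~1.33x
        if ftt == 2:
            return usable_gb * 6 / 4   # RAID-6 overhead 1.5x
    raise ValueError("unsupported ftt/ftm combination")
```

For example, 100 GB usable at FTT=1 needs 200 GB raw with mirroring but only about 133 GB with erasure coding, which is exactly the capacity-vs-performance trade-off the policy settings let you tune.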

server storage I/O locality of reference

Network- and cloud-friendly unicast communication enhancements. To improve performance, availability, and capacity (by reducing CPU demand), multicast communications are no longer used, making for easier, simplified single-site and stretched-cluster configurations. Once a vSAN cluster is upgraded to V6.6, unicast is enabled.

VMware vSAN unicast
Image via VMware

Gaining insight and awareness, adding intelligence to avoid flying blind: introducing vSAN Cloud Analytics and Proactive Guidance. Part of the VMware Customer Experience Improvement Program, this feature leverages cloud-based health checks for easy online detection of known issues, along with relevant knowledge base articles and other support notices. Whether you choose to refer to this feature as advanced analytics, artificial intelligence (AI), or proactive rules-enabled management for problem isolation and resolution, I will leave up to you.

VMware vSAN cloud analytics
Image via VMware

As part of the new tools' analytics capabilities and prescriptive problem resolution (hmm, some might call that AI or advanced analytics, just saying), health check issues are identified and notifications raised along with suggested remediation. Another feature is the ability to leverage continuous proactive updates for advance remediation vs. waiting for subsequent vSAN releases. The net result and benefit is reduced time and complexity in troubleshooting converged data infrastructure issues spanning servers, storage, I/O networking, hardware, software, cloud, and configuration. In other words, it gives you more time to be productive vs. finding and fixing problems, leveraging informed awareness for smart decision-making.

Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means

Continue reading more about VMware vSAN 6.6 in part I here, part II (just the speeds and feeds please) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

VMware vSAN V6.6 Part IV (HCI scaling ROBO and data centers today)

server storage I/O trends

VMware vSAN V6.6 Part IV (HCI scaling ROBO and data centers today)

In case you missed it, VMware announced vSAN v6.6, its hyper-converged infrastructure (HCI) software-defined data infrastructure solution. This is the fourth of a five-part series about VMware vSAN V6.6. View Part I here, Part II (just the speeds and feeds please) is located here, part III (reducing cost and complexity) located here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's software-defined virtual Storage Area Network, part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is an HCI solution combining compute (server), I/O networking, and storage (space and I/O) along with hypervisors, management, and other tools.

Scaling HCI for ROBO and data centers today and for tomorrow

Scaling with stability for today and tomorrow. This includes addressing your applications' Performance, Availability, Capacity and Economics (PACE) workload requirements today and in the future. Scaling with stability means boosting performance, availability (data protection, security, resiliency, durability, FTT) and effective capacity without one of those attributes compromising another.

VMware vSAN data center scaling
Image via VMware

Scaling today for tomorrow also means adapting to today's needs while staying flexible to evolve with new application workloads, hardware, and clouds (public, private, hybrid, inter- and intra-cloud). As part of continued performance improvements, there are enhancements optimized for higher-performance flash SSDs, including NVMe-based devices.

VMware vSAN cloud analytics
Image via VMware

Part of scaling with stability means enhancing performance (as well as productivity), or the effectiveness of a solution. Keep in mind that efficiency is often associated with storage (or server or network) space capacity savings or reductions. In that context, effectiveness means performance and productivity, or how much work can be done with the least overhead impact. With vSAN V6.6, performance enhancements include reduced checksum overhead, enhanced compression and deduplication, along with destaging optimizations.

Other enhancements that collectively contribute to vSAN performance improvements include VMware object handling (not to be confused with cloud or object storage such as S3 or Swift objects) as well as faster iSCSI for vSAN. Cache sizing guidelines have also been refined for accuracy. Keep in mind that a little bit of NAND flash SSD or SCM in the right place can have a significant benefit, while a lot of flash cache costs much cash.

Part of enabling and leveraging new technology today includes support for larger-capacity 1.6TB flash SSDs for cache, as well as lower read latency with 3D XPoint and NVMe drives such as those from Intel among others. Refer to the VMware vSAN HCL for currently supported devices, which continues to evolve along with the partner ecosystem. Future-proofing is also enabled: you can grow from today to tomorrow as new storage class memories (SCM), flash SSDs and NVMe-enhanced storage, among other technologies, are introduced into the market as well as onto the VMware vSAN HCL.

VMware vSAN and data center class applications
Image via VMware

Traditional CI, and in particular many HCI solutions, have been optimized for or focused on smaller application workloads including VDI, resulting in the perception that HCI in general is only for smaller environments, or for larger environments' non-mission-critical workloads. With vSAN V6.6, VMware is addressing and enabling larger-environment mission-critical applications, including InterSystems Caché medical health management software among others. Other application workload extensions include support for higher-performance, demanding Hadoop big data analytics, as well as extending virtual desktop infrastructure (VDI) workspaces with XenDesktop/XenApp, along with Photon 1.1 container support.

What about VMware vSAN 6.6 Packaging and License Options?

As part of vSAN 6.6, VMware offers several packaged solution bundle options for the data center as well as smaller ROBO environments. Contact your VMware representative or partner to learn more about specific details.

VMware vSAN cloud analytics
Image via VMware

VMware vSAN cloud analytics
Image via VMware

Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means

Continue reading more about VMware vSAN 6.6 in part I here, part II (just the speeds and feeds please) is located here, part III (reducing cost and complexity) located here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the Spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

VMware vSAN V6.6 Part V (vSAN evolution and summary)

server storage I/O trends

VMware vSAN V6.6 Part V (vSAN evolution and summary)

In case you missed it, VMware announced vSAN v6.6, its hyper-converged infrastructure (HCI) software-defined data infrastructure solution. This is the fifth of a five-part series about VMware vSAN V6.6. View Part I here, Part II (just the speeds and feeds please) is located here, part III (reducing cost and complexity) found here, and part IV (scaling ROBO and data centers today) located here.

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's software-defined virtual Storage Area Network, part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is an HCI solution combining compute (server), I/O networking, and storage (space and I/O) along with hypervisors, management, and other tools.

How has vSAN (formerly referred to as VSAN) Evolved

A quick recap of the VMware vSAN progression, which first appeared as part of vSphere 5.5 (e.g. vSAN 5.5 can be thought of as a 1.0 in some ways), consists of several releases. Since vSAN is tightly integrated with VMware vSphere along with associated management tools, there is a correlation between enhancements to the underlying hypervisor and added vSAN functionality. Keep in mind that seeing where something has been sometimes helps in viewing where it is going.

Previous vSAN enhancements include:

  • 5.5 Hybrid (mixed HDD and flash)
  • 6.2 (2016) All flash (e.g. AFA) versions included data footprint reduction (DFR) technologies such as compression and dedupe along with performance Quality of Service (QoS) enhancements.
  • 6.5 Cross Cloud functionality including the announcement of container support, cloud-native apps, as well as upcoming vSphere, vSAN, NSX and other VMware software-defined data center (SDDC) and software-defined data infrastructure (SDDI) technology running natively on AWS (not on EC2) cloud infrastructure.
  • 6.6 Modern data infrastructure flexibility, scalability, resiliency, extensibility including performance, availability, capacity and economics (PACE).

V5.5

  • Distributed RAID
  • Per-VM SPBM
  • Set and change FTT via policy
  • In-kernel hyper-convergence engine
  • RVC and Observer

V6.0

  • All-flash architecture
  • Perf improvements (4x IOPS)
  • 64-node support
  • High-density storage blades
  • Fault domain awareness
  • Scalable snapshots and clones
  • Disk enclosure management

V6.1

  • Windows Failover Clustering
  • Oracle RAC support
  • HW checksum and encryption
  • 2-node ROBO mode
  • UltraDIMM and NVMe support
  • Stretch clusters
  • 5 min RPO (vSphere Rep)
  • SMP-FT support
  • Health Check, vROps, Log Insight

V6.2

  • IPv6 support
  • Software checksum
  • Nearline dedupe and compression on all-flash
  • Erasure coding on all-flash
  • QoS IOPS limits
  • Performance monitoring service

V6.5

  • iSCSI
  • 2-Node direct connect
  • PowerCLI
  • Public APIs and SDK
  • 512e support
  • All-Flash to all editions

Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means, wrap up and summary

VMware continues to extend the software-defined data center (SDDC) and software-defined data infrastructure (SDDI) ecosystem with vSAN to address needs from smaller SMB and ROBO environments to larger SME and enterprise workloads. To me, a theme of V6.6 is expanded resiliency and scalability with stability, moving vSAN upmarket as well as into new workloads, similar to how vSphere has evolved.

With each new release, vSAN increases the features, functionality, resiliency and extensibility associated with traditional storage and non-CI or HCI solutions. Overall, this is a good set of enhancements as vSAN continues its evolution from just a few years ago to where it is today and will be in the future. If you have not looked at vSAN recently, take some time beyond reading this piece to learn more.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the Spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Server Storage I/O Benchmark Performance Resource Tools

Server Storage I/O Benchmarking Performance Resource Tools

server storage I/O trends

Updated 1/23/2018

Server storage I/O benchmark performance resource tools, various articles and tips. These include tools for legacy, virtual, cloud and software-defined environments.

benchmark performance resource tools server storage I/O performance

The best server and storage I/O (input/output operation) is the one that you do not have to do, the second best is the one with the least impact.

server storage I/O locality of reference

This is where the idea of locality of reference (e.g. how close is the data to where your application is running) comes into play which is implemented via tiered memory, storage and caching shown in the figure above.

Cloud virtual software defined storage I/O

Server storage I/O performance applies to cloud, virtual, software defined and legacy environments

What this has to do with server storage I/O (and networking) performance benchmarking is keeping the ideas of locality of reference, context, and the application workload in perspective, regardless of whether the environment is cloud, virtual, software-defined, or legacy physical.

StorageIOblog: I/O, I/O how well do you know about good or bad server and storage I/Os?
StorageIOblog: Server and Storage I/O benchmarking 101 for smarties
StorageIOblog: Which Enterprise HDDs to use for a Content Server Platform (7 part series with using benchmark tools)
StorageIO.com: Enmotus FuzeDrive MicroTiering lab test using various tools
StorageIOblog: Some server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
StorageIOblog: Get in the NVMe SSD game (if you are not already)
Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
ComputerWeekly: Storage performance metrics: How suppliers spin performance specifications

Via StorageIO Podcast: Kevin Closson discusses SLOB Server CPU I/O Database Performance benchmarks
Via @KevinClosson: SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language
Via BeyondTheBlocks (Reduxio): 8 Useful Tools for Storage I/O Benchmarking
Via CCSIObench: Cold-cache Sequential I/O Benchmark
CISJournal: Benchmarking the Performance of Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (PDF)
Microsoft TechNet: Windows Server 2016 Hyper-V large-scale VM performance for in-memory transaction processing
InfoStor: What’s The Best Storage Benchmark?
StorageIOblog: How to test your HDD, SSD or all flash array (AFA) storage fundamentals
Via ATTO: Atto V3.05 free storage test tool available
Via StorageIOblog: Big Files and Lots of Little File Processing and Benchmarking with Vdbench

Via StorageIO.com: Which Enterprise Hard Disk Drives (HDDs) to use with a Content Server Platform (White Paper)
Via VMware Blogs: A Free Storage Performance Testing Tool For Hyperconverged
Microsoft Technet: Test Storage Spaces Performance Using Synthetic Workloads in Windows Server
Microsoft Technet: Microsoft Windows Server Storage Spaces – Designing for Performance
BizTech: 4 Ways to Performance-Test Your New HDD or SSD
EnterpriseStorageForum: Data Storage Benchmarking Guide
StorageSearch.com: How fast can your SSD run backwards?
OpenStack: How to calculate IOPS for Cinder Storage ?
StorageAcceleration: Tips for Measuring Your Storage Acceleration

server storage I/O STI and SUT

Spiceworks: Determining HDD SSD SSHD IOP Performance
Spiceworks: Calculating IOPS from Perfmon data
Spiceworks: profiling IOPs

vdbench server storage I/O benchmark
Vdbench example via StorageIOblog.com

StorageIOblog: What does server storage I/O scaling mean to you?
StorageIOblog: What is the best kind of IO? The one you do not have to do
Testmyworkload.com: Collect and report various OS workloads
Whoishostingthis: Various SQL resources
StorageAcceleration: What, When, Why & How to Accelerate Storage
Filesystems.org: Various tools and links
StorageIOblog: Can we get a side of context with them IOPS and other storage metrics?

flash ssd and hdd

BrightTalk Webinar: Data Center Monitoring – Metrics that Matter for Effective Management
StorageIOblog: Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
StorageIOblog: Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?

server storage I/O bottlenecks and I/O blender

Microsoft TechNet: Measuring Disk Latency with Windows Performance Monitor (Perfmon)
Via Scalegrid.io: How to benchmark MongoDB with YCSB? (Perfmon)
Microsoft MSDN: List of Perfmon counters for sql server
Microsoft TechNet: Taking Your Server’s Pulse
StorageIOblog: Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
CMG: I/O Performance Issues and Impacts on Time-Sensitive Applications

flash ssd and hdd

Virtualization Practice: IO IO it is off to Storage and IO metrics we go
InfoStor: Is HP Short Stroking for Performance and Capacity Gains?
StorageIOblog: Is Computer Data Storage Complex? It Depends
StorageIOblog: More storage and IO metrics that matter
StorageIOblog: Moving Beyond the Benchmark Brouhaha
Yellow-Bricks: VSAN VDI Benchmarking and Beta refresh!

server storage I/O benchmark example

YellowBricks: VSAN performance: many SAS low capacity VS some SATA high capacity?
StorageIOblog: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Server Storage I/O Network Benchmark Winter Olympic Games

flash ssd and hdd

VMware VDImark aka View Planner (also here, here and here) as well as VMmark here
StorageIOblog: SPC and Storage Benchmarking Games
StorageIOblog: Speaking of speeding up business with SSD storage
StorageIOblog: SSD and Storage System Performance

Hadoop server storage I/O performance
Various Server Storage I/O tools in a hadoop environment

Michael-noll.com: Benchmarking and Stress Testing an Hadoop Cluster With TeraSort, TestDFSIO
Virtualization Practice: SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
StorageIOblog: Storage and IO metrics that matter
InfoStor: Storage Metrics and Measurements That Matter: Getting Started
SilvertonConsulting: Storage throughput vs. IO response time and why it matters
Splunk: The percentage of Read / Write utilization to get to 800 IOPS?

flash ssd and hdd
Various server storage I/O benchmarking tools

Spiceworks: What is the best IO IOPs testing tool out there
StorageIOblog: How many IOPS can a HDD, HHDD or SSD do?
StorageIOblog: Some Windows Server Storage I/O related commands
Openmaniak: Iperf overview and Iperf.fr: Iperf overview
StorageIOblog: Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)
Quest: SQL Server Perfmon Poster (PDF)
Server and Storage I/O Networking Performance Management (webinar)
Data Center Monitoring – Metrics that Matter for Effective Management (webinar)
Flash back to reality – Flash SSD Myths and Realities (Industry trends & benchmarking tips), (MSP CMG presentation)
DBAstackexchange: How can I determine how many IOPs I need for my AWS RDS database?
ITToolbox: Benchmarking the Performance of SANs
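When comparing numbers from tools such as those listed above, a couple of back-of-envelope relationships help keep results in context. This sketch (illustrative, not tied to any specific tool) shows throughput as IOPS times block size, and IOPS estimated from outstanding I/Os and average latency via Little's Law:

```python
# Back-of-envelope relationships for reading benchmark results:
#   throughput (MB/s) = IOPS * block size
#   IOPS ~= outstanding I/Os / average latency   (Little's Law)

def throughput_mb_s(iops: float, block_kb: float) -> float:
    """Convert an IOPS figure at a given block size into MB/s."""
    return iops * block_kb / 1024.0

def iops_from_latency(outstanding_ios: int, avg_latency_ms: float) -> float:
    """Estimate sustainable IOPS from queue depth and average latency."""
    return outstanding_ios / (avg_latency_ms / 1000.0)

# Example: 20,000 IOPS at 4 KB is ~78 MB/s, while the same 20,000 IOPS at
# 64 KB is 1,250 MB/s -- context (block size) matters when comparing numbers.
```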

server storage IO labs

StorageIOblog: Dell Inspiron 660 i660, Virtual Server Diamond in the rough (Server review)
StorageIOblog: Part II: Lenovo TS140 Server and Storage I/O Review (Server review)
StorageIOblog: DIY converged server software defined storage on a budget using Lenovo TS140
StorageIOblog: Server storage I/O Intel NUC nick knack notes First impressions (Server review)
StorageIOblog & ITKE: Storage performance needs availability, availability needs performance
StorageIOblog: Why SSD based arrays and storage appliances can be a good idea (Part I)
StorageIOblog: Revisiting RAID storage remains relevant and resources

Interested in cloud and object storage visit our objectstoragecenter.com page, for flash SSD checkout storageio.com/ssd page, along with data protection, RAID, various industry links and more here.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Watch for additional links to be added above in addition to those that appear via comments.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

DIY converged server software defined storage on a budget using Lenovo TS140

Attention DIY Converged Server Storage Bargain Shoppers

Software defined storage on a budget with Lenovo TS140

server storage I/O trends

Recently I put together a two-part series of server storage I/O items to get a geek as a gift (read part I here and part II here), which also contains items that can be used for accessorizing servers such as the Lenovo ThinkServer TS140.

Image via Lenovo.com

Likewise, I have done reviews of the Lenovo ThinkServer TS140 in the past, which included me liking them and buying some (read the reviews here and here), along with a review of the larger TD340 here.

Why is this of interest

Do you need or want to do a Do It Yourself (DIY) build of a small server compute cluster, or a software defined storage cluster (e.g. scale-out), or perhaps a converged storage for VMware VSAN, Microsoft SOFS or something else?

Do you need a new server, a second or third server, or to expand a cluster, create a lab or similar, and want the ability to tailor your system without shopping for a motherboard, enclosure, power supply and so forth?

Are you a virtualization or software defined person looking to create a small VMware Virtual SAN (VSAN) needing three or more servers to build a proof of concept or personal lab system?

Then the TS140 could be a fit for you.

storage I/O Lenovo TS140
Image via StorageIOlabs, click to see review

Why the Lenovo TS140 now?

Recently I have seen a lot of traffic on my site from people viewing my reviews of the Lenovo TS140, of which I have a few. In addition, I have gotten questions from people via the comments section as well as elsewhere about the TS140, and while shopping at Amazon.com for some other things, I noticed that there were some good value deals on different TS140 models.

I tend to buy the TS140 models that are bare bones, having an enclosure, CD/DVD, USB ports, power supply and fan, processor, and a minimal amount of DRAM. For processors, mine have the Intel E3-1225 v3, which is quad-core and has various virtualization assist features (e.g. good for VMware and other hypervisors).

What I saw on Amazon the other day (also elsewhere) were some Intel i3-4130 dual-core based systems (these do not have all the virtualization features, just the basics) in a bare configuration (e.g. no Hard Disk Drive (HDD), 4GB DRAM, processor, motherboard, power supply and fan, LAN port and USB) with a price of around $220 USD (your price may vary depending on timing, venue, prime or other membership, and other factors). Not bad for a system that you can tailor to your needs. However, what also caught my eye were the TS140 models that have the Intel E3-1225 v3 (e.g. quad-core, 3.2GHz) processor matching the others I have, with a price of around $330 USD including shipping (your price will vary depending on venue and other factors).

What are some things to be aware of?

Some caveats of this solution approach include:

  • There are probably other similar types of servers, either by price, performance, or similar
  • Compare apples to apples, e.g. same or better processor, memory, OS, PCIe speed and type of slots, LAN ports
  • Not as robust of a solution as those you can find costing tens of thousands of dollars (or more)
  • A DIY system which means you select the other hardware pieces and handle the service and support of them
  • Hardware platform approach where you choose and supply your software of choice
  • For entry-level environments who have floor-space or rack-space to accommodate towers vs. rack-space or other alternatives
  • Software agnostic, based on a basically empty server chassis (with power supply, motherboard, PCIe slots and other things)
  • Possible candidate for smaller SMB (Small Medium Business), ROBO (Remote Office Branch Office), SOHO (Small Office Home Office) or labs that are looking for DIY
  • A starting place and stimulus for thinking about doing different things

What could you do with this building block (e.g. server)

Create a single or multi-server based system for

  • Virtual Server Infrastructure (VSI) including KVM, Microsoft Hyper-V, VMware ESXi, Xen among others
  • Object storage
  • Software Defined Storage including Datacore, Microsoft SOFS, Openstack, Starwind, VMware VSAN, various XFS and ZFS among others
  • Private or hybrid cloud including using Openstack among other software tools
  • Create a Hadoop big data analytics cluster or grid
  • Establish a video or media server, use for gaming or a backup (data protection) server
  • Update or expand your lab and test environment
  • General purpose SMB, ROBO or SOHO single or clustered server

VMware VSAN server storageIO example

What you need to know

Like some other servers in this class, you need to pay attention to what it is that you are ordering; check out the various reviews, comments and questions, as well as verify the make, model and configuration. For example, what is included and what is not included, warranty, and return policy, among other things. Some TS140 models do not include an HDD, OS, keyboard, monitor or mouse, and come with different types of processors and memory. Not all the processors are the same: pay attention, visit the Intel Ark site to look up a specific processor's configuration to see if it fits your needs, and check the hardware compatibility list (HCL) for the software that you plan to use. Note that these should be best practices regardless of the make, model, type or vendor of server, storage, and I/O networking hardware and software.

What you will need

This list assumes that you have obtained a model without a HDD, keyboard, video, mouse or operating system (OS) installed

  • Update your BIOS if applicable, check the Lenovo site
  • Enable virtualization and other advanced features via your BIOS
  • Software such as an Operating System (OS), hypervisor or other distribution (load via USB or CD/DVD if present)
  • SSD, SSHD/HHDD, HDD or USB flash drive for installing OS or other software
  • Keyboard, video, mouse (or a KVM switch)

What you might want to add (have it your way)

  • Keyboard, video mouse or a KVM switch (See gifts for a geek here)
  • Additional memory
  • Graphics card, GPU or PCIe riser
  • Additional SSD, SSHD/HHDD or HDD for storage
  • Extra storage I/O and networking ports

Extra networking ports

You can easily add some GbE (or faster) ports, including using the PCIe x1 slot, or use one of the other slots for a quad-port GbE (or faster) card, not to mention some InfiniBand single- or dual-port cards such as the Mellanox ConnectX-2 or ConnectX-3 that support QDR and can run in IBA or 10GbE modes. If you only have two or three servers in a cluster, grid, or ring configuration, you can run point-to-point topologies using InfiniBand (and some other network interfaces) without using a switch; however, you decide if you need or want switched or non-switched (I have a switch). Note that with VMware (and perhaps other hypervisors or OSes) you may need to update the drivers for the onboard GbE LAN-on-motherboard port (see links below).
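As a quick sketch of why switchless point-to-point connectivity only scales to a few nodes, the port and cable counts for a full mesh grow quickly (my own illustration: n-1 ports per node and n(n-1)/2 cables):

```python
# Sketch: ports and cables needed for a switchless full-mesh cluster,
# where every node is directly connected to every other node.

def full_mesh(nodes: int):
    """Return (ports_per_node, total_links) for a direct full mesh."""
    if nodes < 2:
        raise ValueError("need at least two nodes")
    return nodes - 1, nodes * (nodes - 1) // 2

# 2 nodes: 1 port each, 1 cable. 3 nodes: 2 ports each (a dual-port
# ConnectX card per server), 3 cables. Beyond that, a switch gets attractive.
```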

Extra storage ports

For extra storage space capacity (and performance) you can easily add PCIe G2 or G3 HBAs (SAS, SATA, FC, FCoE, CNA, UTA, IBA for SRP, etc) or RAID cards among others. Depending on your choice of cards, you can then attach to more internal storage, external storage or some combination with different adapters, cables, interposers and connectivity options. For example I have used TS140s with PCIe Gen 3 12Gbs SAS HBAs attached to 12Gbs SAS SSDs (and HDDs) with the ability to drive performance to see what those devices are capable of doing.

TS140 Hardware Defined My Way

As an example of how a TS140 can be configured: take one of the base E3-1225 v3 models with 4GB RAM and no HDD (e.g. around $330 USD, your price will vary), add a 4TB Seagate HDD (or two or three) for around $140 USD each (your price will vary), and add a 480GB SATA SSD for around $340 USD (your price will vary), with those attached to the internal SATA ports. To bump up network performance, how about a Mellanox ConnectX-2 dual-port QDR IBA/10GbE card for around $140 USD (your price will vary), plus around $65 USD for a QSFP cable (your price will vary), and some extra memory (use what you have or shop around), and you have a platform ready to go for around $1,000 USD. Add some more internal or external disks, bump up the memory, put in some extra network adapters, and your price will go up a bit; however, think about what you can have for a robust, not-so-little system. For you VMware vgeeks, think about the proof-of-concept VSAN that you can put together, granted you will have to do some DIY items.
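For a rough tally of the example build above (all prices are the illustrative figures from the text and will vary):

```python
# Hypothetical tally of the example TS140 build described above.
# Prices are the illustrative round numbers from the post, not quotes.
parts = {
    "TS140 base (quad-core E3, 4GB RAM, no HDD)": 330,
    "4TB Seagate HDD": 140,
    "480GB SATA SSD": 340,
    "Mellanox ConnectX-2 dual-port QDR card": 140,
    "QSFP cable": 65,
}
total = sum(parts.values())  # lands right around the $1,000 USD figure
```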

Some TS140 resources

Lenovo TS140 resources include

  • TS140 StorageIOlab review (here and here)
  • TS140 Lenovo ordering website
  • TS140 Data and Spec Sheet (PDF here)
  • Lenovo ThinkServer TS140 Manual (PDF here) and (PDF here)
  • Intel E3-1200 v3 processors capabilities (Web page here)
  • Enabling Virtualization Technology (VT) in TS140 BIOS (Press F1) (Read here)
  • Enabling Intel NIC (82579LM) GbE with VMware (Link to user forum and a blog site here)

Image via Lenovo.com

What this all means

Like many servers in its category (price, capabilities, abilities, packaging) you can do a lot of different things with them, as well as hardware define them with accessories, or use your own software. Depending on how you end up hardware defining the TS140 with extra memory, HDDs, SSDs, adapters or other accessories and software, your cost will vary. However, you can also put together a pretty robust system without breaking your budget while meeting different needs.

Is this for everybody? Nope

Is this for more than a lab, experimental, hobbyist or gamer system? Sure, with some caveats. Is this an apples-to-apples comparison vs. some other solutions including VSANs? Nope, not even close; maybe apples to oranges.

Do I like the TS140? Yup, starting with a review I did about a year ago, I liked it so much I bought one, then another, then some more.

Are these the only servers I have, use or like? Nope, I also have systems from HP and Dell, as well as test drive and review others.

Why do I like the TS140? It’s a value for some things, which means that while affordable (not to be confused with cheap) it has the features, scalability and ability to be hardware defined for what I want or need to use them as, along with being software defined to be different things. Key for me is the PCIe Gen 3 support with multiple slots (and types of slots), a reasonable amount of memory, and internal housing for 3.5" and 2.5" drives that can attach to on-board SATA ports, plus a media device (CD/DVD) if needed, or remove it to make room for more HDDs and SSDs. In other words, it’s a platform where instead of shopping for the motherboard, an enclosure, power supply, processor and related things, I get the basics, then configure and reconfigure as needed.

Another reason I like the TS140 is that I get to have the server basically my way, in that I do not have to order it with some minimum number of HDDs, or with an OS, more memory than needed, or other things that I may or may not be able to use. Granted, I need to supply the extra memory, HDDs, SSDs, PCIe adapters and network ports along with software; however, for me that’s not too much of an issue.

What don’t I like about the TS140? You can read more about my thoughts on the TS140 in my review here, or its bigger sibling the TD340 here, however I would like to see more memory slots for scaling up. Granted for what these cost, it’s just as easy to scale-out and after all, that’s what a lot of software defined storage prefers these days (e.g. scale-out).

The TS140 is a good platform for many things, granted not for everything; that’s why, like storage, networking and other technologies, there are different server options for various needs. Exercise caution when doing apples-to-oranges comparisons on price alone: compare what you are getting in terms of processor type (and its functionality), expandable memory, PCIe speed, type and number of slots, LAN connectivity and other features to meet your needs or requirements. Also keep in mind that some systems might be more expensive but include a keyboard or an HDD with an OS installed; if you can use those components, then they have value and should be factored into your cost, benefit and return on investment.

And yes, I just added a few more TS140s that join other recent additions to the server storageIO lab resources…

Anybody want to guess what I will be playing with, among other things, during the upcoming holiday season?

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

August 2014 Server and StorageIO Update newsletter




Welcome to the August 2014 edition of the StorageIO Update (newsletter) containing trends and perspectives on cloud, virtualization, software defined and data infrastructure topics. This past week I, along with around 22,000 others, attended VMworld 2014 in San Francisco. For those of you in Europe, VMworld Barcelona is October 14-16 2014, with registration and more information found here. Watch for more post-VMworld coverage in upcoming newsletters, articles and posts, along with other industry trend topics. Enjoy this edition of the StorageIO Update newsletter, and I look forward to catching up with you live or online while out and about this fall.

Greg Schulz @StorageIO

August 2014 Industry trend and perspectives

StorageIO Industry Trends and Perspectives

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.

StorageIO comments and perspectives in the news

StorageIO in the news

Virtual Desktop Infrastructure (VDI) remains a popular industry and IT customer topic, not to mention one of the favorite themes of Solid State Device (SSD) vendors. SSD component and system solution vendors, along with their supporters, love VDI because a by-product of aggregation (e.g. consolidation) with VDI is aggravation: increased storage I/O performance demand (IOPS, bandwidth, response time) from consolidating the various desktops. It should not be a surprise that some of the biggest fans encouraging organizations to adopt VDI are the SSD vendors. Read some of my comments and perspectives on VDI here at FedTech Magazine.
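To see why aggregation becomes aggravation, here is a back-of-the-envelope sketch. The desktop counts, per-desktop IOPS and boot-storm multiplier below are illustrative assumptions, not measurements from any particular VDI deployment:

```python
# Back-of-the-envelope VDI storage I/O aggregation (illustrative numbers only).
# Consolidating many desktops aggregates their individual IOPS onto shared
# storage; a "boot storm" multiplies the steady-state demand.

def aggregate_iops(desktops, steady_iops_each, boot_multiplier):
    """Total IOPS the shared storage sees, steady state and during a boot storm."""
    steady = desktops * steady_iops_each
    boot_storm = steady * boot_multiplier
    return steady, boot_storm

steady, storm = aggregate_iops(desktops=500, steady_iops_each=10, boot_multiplier=5)
print(steady)  # 5000 IOPS steady state
print(storm)   # 25000 IOPS during a boot storm
```

A few hundred desktops that were individually trivial loads become tens of thousands of aggregated IOPS on one shared storage system, which is exactly the demand profile SSDs are pitched at.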

Speaking of virtualizing the data center, software defined data centers (SDDC) along with software defined networking (SDN) and software defined storage (SDS) remain popular including some software defined marketing (SDM). Here are some of my comments and perspectives moving beyond the hype of SDDC.

FCIA Fibre Channel Industry Association

Recently the Fibre Channel Industry Association (FCIA), which works with the T11 standards body on both legacy or classic Fibre Channel (FC) as well as the newer FC over Ethernet (FCoE), made some announcements. These include enhancements such as Fibre Channel Back Bone version 6 (FC-BB-6) among others. Both FC and FCoE are alive and doing well, granted one (FC) has been around longer and can be seen at its plateau, while the other (FCoE) continues to evolve and grow in adoption. In some ways, FCoE is in a similar role today to where FC was in the late 90s and early 2000s, ironically facing some common FUD. You can read my comments here as part of a quote in support of the announcement, along with more of my industry trend perspectives in this blog post here.

Buyers guides are popular with vendors and VARs as well as IT organizations (e.g. customers); following are some of my comments and industry trend perspectives appearing in Enterprise Storage Forum. Here are perspectives on buyers guides for Enterprise File Sync and Share (EFSS), Unified Data Storage and Object Storage. EMC has come under pressure, as mentioned in earlier StorageIO Update newsletters, to increase its shareholder benefit including a spin-off of VMware. Here are some of my comments and perspectives that appeared in CruxialCIO. Read more industry trends perspectives comments on the StorageIO news page.

StorageIO video and audio podcasts

StorageIO audio and video podcasts are also available at StorageIO.tv

StorageIOblog posts and perspectives


Despite being declared dead, traditional or classic Fibre Channel (FC) along with FC over Ethernet (FCoE) continues to evolve with FC-BB-6, read more here.

VMworld 2014 took place this past week and included announcements about EVO:RAIL and EVO:RACK (more on this in a future edition). You can get started learning about EVO:RACK and EVO:RAIL at Duncan Epping’s (aka @DuncanYB) Yellow Bricks site. VMware Virtual SAN (VSAN) is at the heart of EVO, which you can read an overview of here in this earlier StorageIO Update newsletter (March 2014).

VMware VSAN
VMware VSAN example

Also watch for some extra content that I’m working on, including some video podcasts, articles and blog posts from my trip to VMworld 2014. However, one of the themes in the background of VMworld 2014 is the current beta of VMware vSphere V6 along with Virtual Volumes aka VVOL’s. The following are a couple of my recent posts, including a primer overview of VVOL’s along with a poll where you can cast your vote. Check out Are VMware VVOL’s in your virtual server and storage I/O future? and VMware VVOL’s and storage I/O fundamentals (Part 1) along with (Part 2).

StorageIO events and activities

Server and StorageIO seminars, conferences, webcasts, events, activities

The StorageIO calendar continues to evolve, including several new events being added for September and well into the fall, with more in the works including upcoming Dutch European sessions the week of October 6th in Nijkerk, Holland (learn more here). The following are some upcoming September events. These include live in-person seminars, conferences, keynote and speaking activities as well as online webinars, twitter chats and Google+ hangouts among others.

  • Sep 25, 2014: MSP CMG, Server and StorageIO SSD industry trends perspectives and tips (TBA, 9:30AM CT)
  • Sep 18, 2014: InfoWorld, Hybrid Storage In Government (Webinar, 2:30PM ET)
  • Sep 18, 2014: Converged Storage and Storage Convergence (Webinar, 9AM PT)
  • Sep 17, 2014: Data Center Convergence (Webinar, 1PM PT)
  • Sep 16, 2014: Critical Infrastructure and Disaster Recovery (Webinar, Noon PT)
  • Sep 16, 2014: Starwind Software, Software Defined Storage and Virtual SAN for Microsoft environments (Webinar, 1PM CT)
  • Sep 16, 2014: Dell BackupU, Exploring the Data Protection Toolbox – Data and Application Replication (Google+, 9AM PT)
  • Sep 2, 2014: Dell BackupU, Exploring the Data Protection Toolbox – Data and Application Replication (Online Webinar, 11AM PT)

Note: dates, times, venues and subject content are subject to change; refer to the events page for current status.

Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, software defined, big data, little data, cloud and object storage, performance and management trends among others.

Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.

Server and StorageIO Technology Tips and Tools

Server and StorageIO seminars, conferences, webcasts, events, activities

In addition to the industry trends and perspectives comments in the news mentioned above, along with the StorageIO blog posts, the following are some of my recent articles and tips that have appeared in various industry venues.

Storage Acceleration

Over at the new Storage Acceleration site I have a couple of pieces, the first is What, When, Why & How to Accelerate Storage and the other is Tips for Measuring Your Storage Acceleration.
Meanwhile, over at Search Storage I have a piece covering What is the difference between a storage snapshot and a clone? and at Search Cloud Storage some tips about What’s most important to know about my cloud privacy policy?. Also, with Software Defined in the news and a popular industry topic, I have a piece over at Enterprise Storage Forum looking at Has Software Defined Jumped the Shark? Check out these and others on the StorageIO tips and articles page.

StorageIO Update Newsletter Archives

Click here to view earlier StorageIO Update newsletters (HTML and PDF versions) at www.storageio.com/newsletter, and subscribe to this newsletter (and pass it along) by clicking here (via secure Campaigner site).

Ok, nuff said (for now)

Cheers gs


VMware VVOLs and storage I/O fundamentals (Part 2)

VMware VVOL’s and storage I/O fundamentals (Part II)

Note that this is a three part series, with the first piece here (e.g. Are VMware VVOL’s in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 2).

Picking up from where we left off in the first part of VMware VVOL’s and storage I/O fundamentals, let’s take a closer look at VVOL’s.

First, however, let’s be clear that while VMware uses terms including object and object storage in the context of VVOL’s, it’s not the same as some other object storage solutions. Learn more about object storage here at www.objectstoragecenter.com

Are VVOL’s accessed like other object storage (e.g. S3)?

No, VVOL’s are accessed via the VMware software and associated APIs that are supported by various storage providers. VVOL’s are not LUNs like regular block (e.g. DAS or SAN) storage that use SAS, iSCSI, FC, FCoE or IBA/SRP, nor are they NAS volumes like NFS mount points. Likewise, VVOL’s are not accessed using any of the various object storage access methods mentioned above (e.g. AWS S3, REST, CDMI, etc.); instead they are an application-specific implementation. For some of you this approach of an application-specific or unique storage access method may be new, perhaps revolutionary; on the other hand, some of you might be having a DejaVu moment right about now.

A VVOL is not a LUN in the context of what you may know and like (or hate, even if you have never worked with them); likewise it is not a NAS volume like you know (or have heard of); neither is it an object in the context of what you might have seen or heard, such as S3 among others.

Keep in mind that what makes up a VMware virtual machine are the VMX, VMDK and some other files (shown in the figure below), and if enough information is known about where those blocks of data are or can be found, they can be worked upon. Also keep in mind that, at least near-term, block is the lowest common denominator upon which all file systems and object repositories get built.

VMware ESXi basic storage I/O
VMware ESXi storage I/O, IOPS and data store basics

Here is the thing: VVOL’s will be accessible via a block interface such as iSCSI, FC or FCoE, or for that matter over Ethernet-based IP using NFS. Think of these storage interfaces and access mechanisms as the general transport for how vSphere ESXi will communicate with the storage system (e.g. their data path) under vCenter management.

What is happening inside the storage system that will be presented back to ESXi will be different from normal SCSI LUN contents, and only understood by the VMware hypervisor. ESXi will still tell the storage system what it wants to do, including moving blocks of data. The storage system, however, will have more insight and awareness into the context of what those blocks of data mean. This is how storage systems will be able to more closely integrate snapshots, replication, cloning and other functions, by having awareness of which data to move, as opposed to moving or working with an entire LUN where a VMDK may live. Keep in mind that the storage system will still function as it normally would; just think of VVOL as another or new personality and access mechanism used for VMware to communicate with and manage storage.

VMware VVOL basics
VMware VVOL concepts (in general) with VMDK being pushed down into the storage system

Think in terms of iSCSI (or FC or something else) for block, or NFS for NAS, as being the addressing mechanism to communicate between ESXi and the storage array, except that instead of traditional SCSI LUN access and mapping, more work and insight is pushed down into the array. Also keep in mind that a LUN is simply an address range composed of Logical Block Numbers (LBNs) or Logical Block Addresses (LBAs). The storage array in turn manages placement of data on SSDs or HDDs, likewise using blocks aka LBAs/LBNs. In other words, a host that does not speak VVOL would get an error if trying to use a LUN or target on a storage system that is a VVOL, that’s assuming it is not masked or hidden ;).

What’s the Storage Provider (SP)

The Storage Provider aka SP is created by, well, the provider of the storage system or appliance, leveraging a VMware API (hint: sign up for the beta and there is an SDK). Simply put, the SP is a two-way communication mechanism leveraging VASA for reporting information, configuration and other insight up to the VMware ESXi hypervisor, vCenter and other management tools. In addition, the storage provider receives VASA configuration information from VMware about how to configure the storage system (e.g. storage containers). Keep in mind that the SP is the out-of-band management interface between the storage system supporting and presenting VVOL’s and the VMware hypervisors.

What’s the Storage Container (SC)

This is a storage pool created on the storage array or appliance (e.g. VMware vCenter works with the array and storage provider (SP) to create it) in place of using a normal LUN. With an SP and PE, the storage container becomes visible to ESXi hosts, and VVOL’s can be created in the storage container until it runs out of space. Also note that the storage container takes on the storage profile assigned to it, which is inherited by the VVOL’s in it. This is in place of presenting LUNs to ESXi that you can then create VMFS data stores on (or use as raw) and then carve storage to VMs.

Protocol endpoint (PE)

The PE provides visibility for the VMware hypervisor to see and access VMDKs and other objects (e.g. .vmx, swap, etc.) stored in VVOL’s. The protocol endpoint (PE) manages or directs I/O received from the VM, enabling scaling across many virtual volumes by leveraging multipathing of the PE (inherited by the VVOL’s). Note that for storage I/O operations, the PE is simply a pass-through mechanism and does not store the VMDK or other contents. If using an iSCSI, FC, FCoE or other SAN interface, the PE works on a LUN basis (again, not actually storing data); if using NAS NFS, then with a mount point. The key point is that the PE gets out of the way.
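The pass-through role of the PE can be sketched with a toy model. To be clear, this is not the VMware API or any vendor's implementation; the class and method names are hypothetical, and the point is only that the endpoint demultiplexes I/O to the right virtual volume while holding no data itself:

```python
# Toy model (not the VMware API) of a protocol endpoint (PE) as a pass-through:
# the PE routes I/O to the right virtual volume but stores no data itself.

class VirtualVolume:
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # block address -> data, held by the array

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba)

class ProtocolEndpoint:
    """Directs I/O received from a VM to the target VVOL; pure pass-through."""
    def __init__(self):
        self.vvols = {}

    def bind(self, vvol):
        self.vvols[vvol.name] = vvol

    def io(self, vvol_name, op, lba, data=None):
        vvol = self.vvols[vvol_name]   # demultiplex to the right volume
        if op == "write":
            vvol.write(lba, data)
        elif op == "read":
            return vvol.read(lba)

pe = ProtocolEndpoint()
pe.bind(VirtualVolume("vm1.vmdk"))
pe.io("vm1.vmdk", "write", 0, b"boot sector")
print(pe.io("vm1.vmdk", "read", 0))  # b'boot sector'
```

Notice the ProtocolEndpoint keeps only a routing table: all data lives in the per-volume objects, which mirrors the idea that one PE (one LUN or mount point) can front many VVOL's.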

VVOL Poll

What are your VVOL plans? View results and cast your vote here

Wrap up (for now)

There certainly are many more details to VVOL’s that you can get a preview of in the beta, as well as via various demos, webinars and VMworld sessions as more becomes public. However, for now, hope you found this quick overview of VVOL’s of use. Since VVOL’s at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta or poke around the web (for now). Also, if you have not seen the first part overview of this piece, check it out here, as I give some more links to get you started learning more about VVOL’s.

Keep an eye on and learn more about VVOL’s at VMworld 2014 as well as in various other venues.

IMHO VVOL’s are or will be in your future; however, the question will be: is there going to be a back to the future moment for some of you with VVOL’s?

What VVOL questions, comments and concerns are in your future and on your mind?

Ok, nuff said (for now)

Cheers gs


VMware VVOLs storage I/O fundamentals (Part 1)

VMware VVOL’s storage I/O fundamentals (Part I)

Note that this is a three part series, with the first piece here (e.g. Are VMware VVOL’s in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 2).

Some of you may already be participating in the VMware beta of VVOL involving one of the initial storage vendors also in the beta program.

Ok, now let’s go a bit deeper, however if you want some good music to listen to while reading this, check out @BruceRave GoDeepMusic.Net and shows here.

Taking a step back, digging deeper into Storage I/O and VVOL’s fundamentals

Instead of a VM host accessing its virtual disk (aka VMDK) stored in a VMFS-formatted data store (part of the ESXi hypervisor) built on top of a SCSI LUN (e.g. SAS, SATA, iSCSI, Fibre Channel aka FC, FCoE aka FC over Ethernet, IBA/SRP, etc.) or an NFS file system presented by a storage system (or appliance), VVOL’s push more functionality and visibility down into the storage system, shifting intelligence and work from the hypervisor. Instead of a storage system simply presenting a SCSI LUN or NFS mount point and having limited (coarse) to no visibility into how the underlying storage bits, bytes and blocks are being used, storage systems gain more awareness.

Keep in mind that even files and objects still ultimately get mapped to pages and blocks aka sectors, even on nand flash-based SSDs. However, also keep an eye on some new technology such as the Seagate Kinetic drive that, instead of responding to SCSI block-based commands, leverages object APIs and associated software on servers. Read more about these emerging trends here and here at objectstoragecenter.com.

With a normal SCSI LUN the underlying storage system has no knowledge of how the upper-level operating system, hypervisor, file system or application such as a database (doing raw I/O) is allocating the pages or blocks of storage. It is up to the upper-level storage and data management tools to map from objects and files to the corresponding extents, pages and logical block addresses (LBAs) understood by the storage system. In the case of a NAS solution, there is a layer of abstraction placed over the underlying block storage, handling file management and the associated file-to-LBA mapping activity.

Storage I/O basics
Storage I/O and IOP basics and addressing: LBA’s and LBN’s
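The file-to-LBA mapping described above can be sketched in a few lines. The extent map, block size and addresses below are made-up illustrative values, not any real file system's on-disk format; the point is only the translation from a byte offset in a file to a block address the storage system understands:

```python
# Sketch of the file-to-LBA mapping a file system or hypervisor performs:
# a byte offset within a file resolves through an extent map to a logical
# block address (LBA) on the underlying storage. Numbers are illustrative.

BLOCK_SIZE = 512  # bytes per sector/LBA

# Hypothetical extent map: (file_start_block, length_in_blocks, start_lba)
extent_map = [
    (0, 100, 5000),    # file blocks 0..99    live at LBA 5000..5099
    (100, 50, 12000),  # file blocks 100..149 live at LBA 12000..12049
]

def offset_to_lba(byte_offset):
    """Translate a byte offset in the file to the LBA that holds it."""
    file_block = byte_offset // BLOCK_SIZE
    for start, length, lba in extent_map:
        if start <= file_block < start + length:
            return lba + (file_block - start)
    raise ValueError("offset beyond allocated extents")

print(offset_to_lba(0))      # 5000
print(offset_to_lba(51200))  # byte 51200 = file block 100 -> 12000
```

The storage system only ever sees the resulting LBAs; all the knowledge of which file (or VMDK) those blocks belong to stays in the layer that owns the extent map, which is exactly the visibility gap VVOL's aim to close.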

Getting back to VVOL: instead of simply presenting a LUN, which is essentially a linear range of LBAs (think of a big table or array) whose data placement and access the hypervisor then manages, the storage system now gains insight into which LBAs correspond to various entities such as a VMDK or VMX, log, clone, swap or other VMware objects. With this insight, storage systems can now do native and more granular functions such as clone, replication and snapshot among others, as opposed to simply working on a coarse LUN basis. Similar concepts extend over to NAS NFS-based access. Granted, there is more to VVOL’s, including the ability to get the underlying storage system more closely integrated with the virtual machine, hypervisor and associated management, including service management and classes or categories of service across performance, availability, capacity and economics.

What about VVOL, VAAI and VASA?

VVOL’s build on earlier VMware initiatives including VAAI and VASA. With VAAI, VMware hypervisors can off-load common functions such as copy, clone and zero-copy among others to storage systems that support those features, similar to how a computer can off-load graphics processing to a graphics card if present.

VASA however provides a means for visibility, insight and awareness between the hypervisor and its associated management (e.g. vCenter etc) as well as the storage system. This includes storage systems being able to communicate and publish to VMware its capabilities for storage space capacity, availability, performance and configuration among other things.

With VVOL’s, VASA gets leveraged for bidirectional (e.g. two-way) communication, where the VMware hypervisor and management tools can tell the storage system things such as configuration and activities to do, among others. Hence why VASA is important to have in your VMware CASA.

What’s this object storage stuff?

VVOL’s are a form of object storage access in that they differ from traditional block (LUNs) and file (NAS volumes/mount points) access. However, keep in mind that not all object storage is the same, as there are different object storage access methods and architectures.

object storage
Object Storage basics, generalities and block file relationships

Avoid making the mistake of assuming that when you hear object storage it means ANSI T10 (the folks that manage the SCSI command specifications) Object Storage Device (OSD), or something else specific. There are many different types of underlying object storage architectures, some with block and file as well as object access front ends. Likewise there are many different types of object access that sit on top of object architectures as well as traditional storage systems.

Object storage I/O
An example of how some object storage gets accessed (not VMware specific)

Also keep in mind that there are many different types of object access mechanisms, including HTTP REST-based, S3 (e.g. a common industry de facto standard based on the Amazon Simple Storage Service), SNIA CDMI, SOAP, Torrent, XAM, JSON, XML, DICOM and HL7, just to name a few, not to mention various programmatic bindings or application-specific implementations and APIs. Read more about object storage architectures, access and related topics, themes and trends at www.objectstoragecenter.com
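The block vs. object access contrast running through this series can be made concrete with a toy sketch. This is not any specific product's or service's API (the class and method names are hypothetical); it only contrasts get/put of whole named objects against read/write of fixed-size sectors at numeric addresses:

```python
# Toy contrast (not any specific product's API) between object access
# (get/put whole named objects) and block access (read/write fixed-size
# sectors at numeric logical block addresses).

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data):          # whole object, addressed by name
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

class BlockDevice:
    def __init__(self, sectors, sector_size=512):
        self._sector_size = sector_size
        self._sectors = [bytes(sector_size) for _ in range(sectors)]

    def write(self, lba, data):        # fixed size, addressed by number
        assert len(data) == self._sector_size
        self._sectors[lba] = data

    def read(self, lba):
        return self._sectors[lba]

obj = ObjectStore()
obj.put("backups/vm1.img", b"...image bytes...")
print(obj.get("backups/vm1.img"))      # addressed by key, any size

blk = BlockDevice(sectors=8)
blk.write(3, b"\x00" * 512)
print(len(blk.read(3)))                # 512: always a full sector
```

The object side knows names and whole payloads; the block side knows only numbered sectors. VVOL's sit in between: transported over block or NFS, yet giving the array object-like awareness of what the blocks mean.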

Let’s take a break here, and when you are ready, click here to read the third piece in this series, VMware VVOL’s and storage I/O fundamentals Part 2.

Ok, nuff said (for now)

Cheers gs


Are VMware VVOLs in your virtual server and storage I/O future?

Are VMware VVOL’s in your virtual server and storage I/O future?

Note that this is a three part series with the first piece here (e.g. Are VMware VVOL’s in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 2).

With VMworld 2014 just around the corner, for some of you the question is not if Virtual Volumes (VVOL’s) are in your future, rather when, where, how and with what.

What this means is that for some, hands-on beta testing is already occurring or will be soon, while for others that might be around the corner or down the road.

Some of you may already be participating in the VMware beta of VVOL involving one of the first storage vendors also in the beta program.

VMware vvol beta

On the other hand, some of you may not be in VMware centric environments and thus VVOL’s may not yet be in your vocabulary.

How do you know if VVOL’s are in your future if you don’t know what they are?

First, to be clear: as of the time this was written, VMware VVOL’s are not released and are only in beta, as well as having been covered in earlier VMworlds. Consequently, what you are going to read here is based on VVOL material that has already been made public in various venues, including earlier VMworlds and VMware blogs among other places.

A quick synopsis of VMware VVOL’s:

  • Higher level of abstraction of storage vs. traditional SCSI LUN’s or NAS NFS mount points
  • Tighter level of integration and awareness between VMware hypervisors and storage systems
  • Simplified management for storage and virtualization administrators
  • Removing complexity to support increased scaling
  • Enable automation and service managed storage aka software defined storage management

VVOL considerations and your future

As mentioned, as of this writing, VVOL’s are still a future item, granted they exist in beta.

For those of you in VMware environments, now is the time to add VVOL to your vocabulary, which might mean simply taking the time to read a piece like this, or digging deeper into the theory of operations, configuration, usage, hints and tips, tutorials, along with vendor-specific implementations.

Explore your options, and ask yourself: do you want VVOL, or do you need it?

What support does your current vendor(s) have for VVOL, or what is their statement of direction (SOD), which you might have to get from them under NDA?

This means that there will be some first vendors with some of their products supporting VVOL’s, with more vendors and products following (hence watch for many statement of direction announcements).

Speaking of vendors, watch for a growing list of vendors to announce their current or planned support for VVOL’s, not to mention watch some of them jump up and down like Donkey in Shrek saying "oh oh pick me pick me".

When you ask a vendor if they support VVOL’s, move beyond the simple yes or no: ask which of their specific products, whether it is block (e.g. iSCSI) or NAS file (e.g. NFS) based, and about other caveats or configuration options.

Watch for more information about VVOL’s in the weeks and months to come, both from VMware and from their storage provider partners.

How will VVOL impact your organization’s best practices, policies and workflows, including who does what, along with associated responsibilities?

Where to learn more

Check out the companion pieces to this that take a closer look at storage I/O and VMware VVOL fundamentals here and here.

Also check out this good VMware blog via Cormac Hogan (@CormacJHogan) that includes a video demo; granted it’s from 2012, however some of this stuff actually does take time, and thus it is very timely. Speaking of VMware, Duncan Epping (aka @DuncanYB) at his Yellow-Bricks site has some good posts to check out as well, with links to others including this here. Also check out the various VVOL-related sessions at VMworld as well as the many existing, and soon to be many more, blogs, articles and videos you can find via Google. And if you need a refresher, Why VASA is important to have in your VMware CASA.

Of course keep an eye here, or in whichever venue you happen to read this, for future follow-up and companion posts, and if you have not done so, sign up for the beta here as there is lots of good material including SDKs, configuration guides and more.

VVOL Poll

What are your VVOL plans? View results and cast your vote here

Wrap up (for now)

Hope you found this quick overview of VVOL’s of use. Since VVOL’s at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta or poke around the web (for now).

Keep an eye on and learn more about VVOL’s at VMworld 2014 as well as in various other venues.

IMHO VVOL’s are or will be in your future; however, the question will be: is there going to be a back to the future moment for some of you with VVOL’s?

Also, what VVOL questions, comments and concerns are in your future and on your mind?

And remember to check out the second part to this series here.

Ok, nuff said (for now)

Cheers gs
