ROI From Use Of Global Control Plane For Expanding VDI Environments

The following is a new Industry Trends Perspective White Paper Report titled ROI From Use Of Global Control Plane For Expanding VDI Environments.

ROI From Use Of Global Control Plane For Expanding VDI Environments

This new StorageIO report looks at the ROI from use of a global control plane for expanding VDI environments. Using a pro-forma analysis, the report provides a financial and economic model comparison, with Return on Investment (ROI) cost savings analysis, for managing cloud-based virtual desktop infrastructure (VDI) environments.

Cloud File Data Storage Consolidation and Economic Comparison Model

IT data infrastructure resource (servers, storage, I/O network, hardware, software, services) decision-making involves evaluating and comparing technical attributes (speeds, feeds, features) of a solution or service. Another aspect of data infrastructure resource decision-making involves assessing how a solution or service will support and enable a given application workload, along with associated management costs from a Performance, Availability, Capacity, and Economic (PACE) perspective.

Keep in mind that all application workloads have some amount of PACE resource requirements that may be high, low, or various permutations in between, along with associated management costs. Performance, Availability (including data protection along with security), as well as Capacity are addressed via technical speeds, feeds, and functionality, along with workload suitability analysis.

Management costs are a function of initial and recurring tasks to support a given function or service such as VDI. The cost of management includes staff salary along with the amount of time needed to perform various tasks. The E in PACE resource decision-making is about the Economic analysis of various costs associated with different solution approaches.
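
To make the E concrete, the following is a simple sketch of management cost as a function of tasks, time per task, and loaded staff cost; the task names, hours, and rates are illustrative assumptions, not figures from the report:

```python
# Illustrative PACE "E" (Economics) sketch: management cost as a function
# of tasks, time per task, and loaded staff cost. All task names, hours,
# and rates below are assumptions, not figures from the StorageIO report.

HOURLY_RATE = 75.0  # assumed fully loaded admin cost per hour

# (task, hours per occurrence, occurrences over 36 months)
tasks = [
    ("initial environment setup",      40.0,   1),
    ("image and application updates",   8.0,  36),
    ("user provisioning and support",   2.0, 500),
]

total_hours = sum(hours * times for _, hours, times in tasks)
total_cost = total_hours * HOURLY_RATE
print(f"36-month management: {total_hours:,.0f} hours, ${total_cost:,.0f}")
```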

ROI From Use Of Global Control Plane For Expanding VDI Environments

The above image is an example from the White Paper Report titled ROI From Use Of Global Control Plane For Expanding VDI Environments.

In the example shown above, 36-month OpEx cost (and time) savings are shown for traditional cloud-based VDI management tools, technologies, and techniques vs. a modern cloud platform integrated global control plane solution. Leveraging a cloud platform integrated global control plane solution, such as NetApp VDS among others, management costs for initial and recurring tasks can be reduced from $2,587,394 to $968,041 for 1,001 users.

In addition to the cost savings shown above, note the reduction of 21,653 management hours over 36 months, time that could be used for other work or to reduce your OpEx spend. Of course, your savings will vary based on tasks, time per task, and admin cost, among other considerations.
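
As a back-of-envelope check on those reported figures (the per-user and per-hour values below are derived here for illustration, not quoted from the report):

```python
# Reported 36-month figures for 1,001 users (from the report summary above).
traditional   = 2_587_394   # traditional cloud VDI management OpEx ($)
control_plane =   968_041   # global control plane management OpEx ($)
hours_saved   =    21_653   # management hours saved

savings      = traditional - control_plane   # $1,619,353 total
per_user     = savings / 1_001               # ~$1,618 per user
implied_rate = savings / hours_saved         # ~$75/hour implied admin cost
print(f"${savings:,} saved, ${per_user:,.0f}/user, ${implied_rate:,.0f}/hour")
```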

The shift from Capital Expenditure (CapEx) IT data infrastructure spending to an Operational Expenditure (OpEx) focus, particularly with IT clouds, has resulted in increased OpEx budget demands. Increased spending is more than simply moving IT spend from the CapEx to the OpEx column in budgets. OpEx increases are an accumulation of increased cloud services and data infrastructure spend, along with management (initial and recurring) costs.

The good news is that there are opportunities to reduce OpEx, or stretch your IT budget to do more, while boosting productivity, performance, and effectiveness without compromise. By looking at how to use new technologies in new ways, including leveraging cloud platform integrated global control planes for management of VDI (and other functions), initial and recurring OpEx management costs can be reduced.

Read more in this Server StorageIO Industry Trends Report here.

Where to learn more

Learn more about ROI From Use Of Global Control Plane For Expanding VDI Environments, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Application Data Value Characteristics Everything Is Not the Same
PACE your Infrastructure decision-making, it’s about application requirements
Cloud conversations: confidence, certainty, and confidentiality
Industry adoption vs. industry deployment, is there a difference?
Ten tips to reduce your cloud compute storage costs 
Don’t Stop Learning Expand Your Skills Experiences Everyday 
ToE NVMeoF TCP Performance Reduce Costs
Data Infrastructure Server Storage I/O Tradecraft Trends
Data Infrastructure Overview, Its What’s Inside of Data Centers
Data Infrastructure Management (Insight and Strategies)
Data Protection Diaries (Archive, Backup, BC, BR, DR, HA, Security)
NetApp VDS with Global Control Plane Cloud VDI Management

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

In addition, looking at your IT data infrastructure cloud spend can also help you boost effectiveness, productivity, and return on investment while reducing your OpEx spend, or doing more with it. Leveraging financial pro-forma analysis as a tool, in conjunction with your technology feature, function, speeds, and feeds comparisons, enables informed decision-making.

When comparing and making data infrastructure resource decisions, consider the application workload PACE characteristics. Shift or expand your focus from simply looking at costs from an efficiency and utilization perspective to also include the performance, productivity, and effectiveness of your IT OpEx spending.

Keep in mind that PACE means Performance (productivity), Availability (data protection), Capacity, and Economics. This includes making decisions based on technical features and functionality (speeds and feeds) as well as how the solution supports your application workload. Leverage resources, including tools to perform analysis, such as the ROI From Use Of Global Control Plane For Expanding VDI Environments approach.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Microsoft MVP Cloud and Data Center Management, previous 10 time VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), Data Infrastructure Management (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Announcing My New Book Data Infrastructure Management Insight Strategies

My new book Data Infrastructure Management Insight Strategies, published via Auerbach/CRC Press, is now available via CRC Press and Amazon.com among other global venues.

My Fifth Solo Book Project – Data Infrastructure Management

Data Infrastructure Management Insight Strategies (e.g., the white book) is my fifth solo published book, in addition to several other collaborative works. As its title suggests, the focus of this new book is data infrastructures: the tools, technologies, techniques, and trends, including the hardware, software, services, people, and policies inside data centers that get defined to support business and application services delivery. The book (ISBN 9781138486423) is soft cover (electronic Kindle versions are also available) with 250 pages and over 100 figures, tables, tips, and examples. You can explore the contents via Google Books here.

Data Infrastructure Books by Greg Schulz
Stack of my solo books with common theme around Data Infrastructure topics

Data Infrastructure Management Book
Data Infrastructure Management – Insight and Strategies e.g. the White book (CRC Press 2019)

Some of My Other Books Include

Click on the following book images to learn more, as well as order your copy.

Software Defined Data Infrastructure Essentials Book – SNIA Recommended Reading List
Software Defined Data Infrastructure Essentials (SDDI) – Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft, e.g. the Blue Book, covers software-defined, SDDC, SDDI, and hybrid, among other topics including serverless, containers, NVMe, SSD, flash, PMEM, and SCM as well as others. (CRC Press 2017) Available at Amazon.com among other global venues.

Cloud and Virtual Data Storage Networking – Intel Recommended Reading List
Cloud and Virtual Data Storage Networking (CVDSN) – Your Journey to efficient and effective Information Services e.g. the Yellow or Gold Book (CRC Press 2011) available at Amazon.com among other global venues.

The Green and Virtual Data Center Book – Intel Recommended Reading List
The Green and Virtual Data Center (TGVDC) – Enabling Efficient, Effective and Productive Data Infrastructures e.g. the Green Book (CRC Press 2009) available at Amazon.com among other venues.

Resilient Storage Networks Book
Resilient Storage Networks (RSN) – Designing Flexible, Scalable Data Infrastructures (Elsevier 2004), e.g. the Red Book, is SNIA Education Endorsed Reading, available at Amazon.com among other venues. I have some free copies of RSN for anybody who is willing to pay shipping and handling; send me a note and we will go from there.

Where to learn more

Learn more via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Today more than ever there tends to be a focus on the date something was created or published, as there is a lot of temporal content with a short shelf life. This means that there is a lot of content, including books, being created that is temporal, usually focused on a particular technology, tool, or trend with a life span or attention focus of a couple of years at best.

On the other hand, there is also content being created today that combines new and emerging technology, tools, and trends with time-tested strategies, techniques, and processes, some of whose names or buzzwords will evolve. My books fit into the latter category of combining current as well as emerging technologies, tools, trends, and techniques that support a longer shelf life; just insert your new favorite buzzword, buzz trend, or buzz topic as needed.

Data Infrastructure Books by Greg Schulz

You will also notice, looking at the stack of books, that Data Infrastructure Management Insight and Strategies is a smaller soft-cover book compared to others in my collection. The reason is that this new book can be a quick read to address what you need, as well as a companion to others in the stack depending on what your focus or requirements are.

A common question I get, having written several books, not to mention the thousands of articles, tips, reports, blogs, columns, white papers, videos, and webinars among other content, is what's next? Good question; see what's next, as well as check out some other things I'm doing over at www.picturesoverstillwater.com where I'm generating big data that gets stored and processed in various data infrastructures including cloud ;) .

Will there be another book, and if so, on or about what? As I mentioned, there are some projects I'm exploring; will they get finished or take different directions? Wait and see what's next.

How do I find the time to create these books, and how long does it take? The time required varies, as does the amount of work and what else I'm doing. I try to leverage the book (and other content creation projects) with other things I'm doing to maximize time. Some book projects have been very fast, a year or less. Some take longer, such as Software Defined Data Infrastructure Essentials, as it is a big book with lots of material that will have a long shelf life.

Do I write and illustrate the books, or do I have somebody do them for me? For my books I do the writing and illustrating (drawings, figures, images) myself, along with some of the layout, relying on external copy editors and production folks.

What do I recommend or give as advice to those wanting to write a book? Understand that publishing a book is a project; there's the actual writing, editing, reviews, artwork, research, labs, or other support items as book companions. Also understand why you are writing a book: for fame, fortune, acclaim, to share with others, or some other reason. I also recommend, before you write your entire book, talking with others who have been published to test the waters and get feedback. You might find it easier to shop an extended outline than a completed manuscript, that is, unless you are writing a novel or similar.

Want to learn more about writing a book (or other content), get feedback, or have other questions? Drop me a note and I will do what I can to help out.

Data Infrastructure Management Book

There is an old saying, publish or perish, well, I just published my fifth solo book Data Infrastructure Management Insight Strategies that you can buy at Amazon.com among other venues.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2019. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2019 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Fall 2018 Dutch Data Infrastructure Industry Trends Decision Making Seminars

There is still time to register for the fall 2018 Dutch data infrastructure industry trends decision-making seminars on November 27th and 28th. The workshops are being organized by Brouwer Storage Consultancy of Holland and will be held in Nijkerk.

On Tuesday, November 27th, there will be an advanced education workshop seminar covering data infrastructure industry trends and technology update presented by myself. On Wednesday, November 28th, there will be a deeper dive workgroup seminar session addressing data infrastructure related strategy, planning, and decision-making.

Data Infrastructures Industry Trends November 27

What's new, what's the buzz, what you need to know about: from speeds and feeds, slots and watts, to who's doing what, from the interesting to what's relevant for your environment.

This one-day seminar is a new and improved version of the popular speeds and feeds session where we look at what’s new and emerging in the industry as well as applicable to your environments. You will be updated about the latest trends and emerging data infrastructure technologies to support digital transformation, little and big data analytics, AI/ML/DL, GDPR, data protection, edge/fog compute, and IoT among others. From legacy to the software-defined cloud, container converged and virtual to composable. The seminar is a mix of presentation and engaging discussion as we look into details of favorite or new technologies for both those who are old-school, new-school and current or future school.

Part I – Industry Trends, Applications, and Workload
Part II – Server Compute, Memory, I/O, hardware and software
Part III – Storage and Data protection for on-prem and cloud
Part IV – Bringing it all together, managing and decision making

Topics to be covered include among others:

  • What these trends, tools, technologies mean for different environments of various size.
  • Tips on evaluating legacy and startup or newer vendors as well as technologies.
  • Updates on vendors, services, technologies, products you may or may not have heard of.
  • Cloud (public/private/multi-cloud/hybrid) compute, storage and management.
  • Containers (including docker, windows, kubernetes, FaaS, serverless, lambda).
  • Converged and hyper-converged; Gen-Z and composable; NVMe and NVMeoF.
  • Persistent Memory (PMEM), Storage Class Memory (SCM), 3D XPoint, NAND Flash SSD.
  • Legacy vs. software-defined, appliances, storage systems, block, NAS file, object, table.
  • Bulk cloud data migration appliances, storage for the edge, file sync and share.
  • Role and importance of context (what’s applicable, what something means).
  • Who’s doing what, what to look for today for the future.

This seminar is for those involved with ICT/IT servers, storage, I/O networking, and associated management activities, including data protection, of legacy as well as software-defined cloud, containers, converged, hyper-converged, and virtualization. This seminar is for professionals who manage, architect, or are otherwise involved with data infrastructure related strategy and acquisitions.

Data Infrastructures Deep Dive Decision Making November 28

Enabling informed strategy and decision-making: moving from what the evolving tools, trends, and technologies are, to what to use when, where, why, and how, along with strategy, planning, decision-making, and ongoing management.

If the answer is a cloud, converged, container, composable, edge, fog, digital transformation, on-prem, hybrid, software-defined, what were or are the questions to plan as well as prepare for deployment today, along with in the future? This workshop format seminar provides answers to fundamental questions, with essential insight into software-defined data infrastructures (SDDI) and software-defined data centers (SDDC). For ICT/IT professionals (architects, strategists, administrators, managers) currently or planning on being involved with servers, storage, I/O networking, hardware, software, converged, containers, cloud backup/data protection, and associated topics, this seminar is for you.

Clouds, converged, and containers will be a primary focus along with related themes and topics that you need to know more about. Don't be scared of clouds; be prepared, and this includes for on-prem, public, hybrid, and multi-cloud. As part of our deeper dive decision-making strategy focus, we look at cloud cost considerations, including whether you are paying too much or not enough (e.g., are you depriving your applications of performance to save money?). We will explore various decision-making and strategy topics spanning AWS, Microsoft Azure, Azure Stack, Windows and Hyper-V, VMware (including on AWS), and OpenStack (is it still open for business?).

Additional topics, trends, themes include:

  • Everything is not the same across cloud services, converged, or containers.
  • Different environments have various data infrastructure resource needs.
  • How to balance legacy on-prem application needs with emerging technology options.
  • Different comparison criteria for smaller environments and remote offices vs. larger enterprises.
  • Do it yourself (DiY) vs. turnkey software vs. bundled tin-wrapped software solutions.
  • Strategy, planning, decision-making, and ongoing management.

How To Register For Seminar Workshops

Learn more about fall 2018 Dutch Server StorageIO Data Infrastructure Tuesday trends workshop seminar here (PDF), and Wednesday deeper dive decision-making workshop session here (PDF).

To register and obtain more information, contact event organizer Brouwer Storage Consultancy at +31-33-246-6825 or +31-652-601-309, or info at brouwerconsultancy.com.

Where to learn more

Learn more about Data Infrastructure and related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Everything is not the same across different organizations, environments, application workloads, data, technology, tools, and trends. These two one-day interactive workshop seminars provide timely insight into what's going on in the data infrastructure industry, along with common IT organization challenges as well as how to address them. Moving from the what, to what to use when, where, why, and how, along with alternatives, gaining insight and awareness to avoid flying blind, enables effective strategy, decision-making, planning, and ongoing management. Learn more and sign up for the Fall 2018 Dutch Data Infrastructure Industry Trends Decision Making Seminars; see you in Nijkerk.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Disk Test Drive Impressions #blogtobertech

Data Box Disk Test Drive Impressions is the last of a four-post series looking at Microsoft Azure Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 2 Microsoft Azure Data Box Family, and Part 3 Microsoft Azure Data Box Disk Test Drive Review.

Overall, I liked the Azure Data Box experience, along with the range of options to select the best-fit solution for my needs. A common trend among the major cloud service providers such as AWS, Microsoft Azure, and Google is recognizing that a one-size-fits-all solution does not meet different customer needs.

The only things that I did not like, and would like to see improved, with Azure Data Box are two items: one at the beginning, the other at the end of the process. Granted, with Data Box Disks still in preview, there is time for those items to be addressed before general availability, and I have passed the feedback on to Microsoft.

At the beginning of the process, things are pretty straightforward with good tools along with resources to help you navigate which type of Data Box to order, how to order, specify your account details and other information.

What I did not like about the up-front experience was, after the quick ordering and notification process, the time delay of a week or more until being notified of when a Data Box would be arriving. Granted, I was not in a rush, and Microsoft did indicate that it could take about ten days to be informed of availability; still, this is something that should be done more quickly as resources become available. Another option is for Microsoft to add a priority or low-priority ordering option in the future.

The other experience that I did not like was at the very end: unless it got stuck in an email spam trap (I checked, and could not find it), the final notification could be better. Not only a final email note saying your data is copied, but also a reminder of where your block or page blobs were copied to (e.g., what you set up when ordering).

Monitoring the progress of the process, I knew when the Data Box drives arrived at Microsoft, and when the copy started and completed, including error status. Having gotten used to receiving update notifications from Azure, a closing note saying congratulations, your data has been copied, check here for any errors or other info, along with a reminder of where the data was copied to, would be useful.

Likewise, a follow-up note from Microsoft saying that the Azure Data Box drives used as part of the transfer were securely erased along with a certificate of digital destruction would be useful for compliance purposes.

As mentioned above, overall I found the Data Box Disk experience very positive and a great way to move bulk data faster than what could be done with available networks. My next step is to migrate some of the transferred data to cold long-term archive storage, some to Azure Files, with some staying in block blobs. There are also a couple of VHD and VHDX images that will be moved and attached to VMs for additional testing.

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

For those who have a need to move large amounts of data, including structured, unstructured, semi-structured, little or big data, to a cloud resource, solutions such as Azure Data Box may be in your future. Likewise, for those looking to support remote and edge workloads, from AI, ML, and DL inferencing to large-scale data pre-processing, data collection and acquisition, video, telemetry, and IoT among others, Data Box type solutions may be in your future. Overall, my Microsoft Azure Data Box Disk impressions were favorable, and I was able to address a project that had been on my to-do list for some time.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Disk Test Drive Review #blogtobertech

Microsoft Azure Data Box Test Drive is part three of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 2 Microsoft Azure Data Box Family, and Part 4 Microsoft Azure Data Box Disk Impressions.

Getting Started

The workflow for using Data Box involves selecting the type of Data Box to use via the Microsoft Azure portal (here), or the Data Box Family page (here).

Getting Started via the Microsoft Azure Data Box Family Page image via Microsoft.com

The first step of ordering a Data Box is to specify your Azure subscription, type of operation (e.g., import data into Azure, or export out), source country/region, and destination Azure region.

Selecting Data Box from Azure Portal

The next step is to determine the type of Data Box; in this test I chose 40 TB Data Box Disks. Make a note of fees to avoid any surprises.

Selecting Data Box Disks (40 TB) From Azure Portal

After selecting the type of Data Box, fill in storage account information, using an existing resource or creating new ones as needed. Make a note of these selections, as you will need them after the copy is done; this is where your data will be located.

Specify Azure Storage Account Information Where Data Will Transfer To

Once the order is placed, an email is received confirming the order and, Data Box Disks being in preview, indicating that it might take about ten days to receive a status update on availability of the devices.

Email notification received after the order is placed

After about ten days, I was contacted by Microsoft via email (not shown) confirming the amount of data to be copied, to determine how many disks would be needed. Once this was confirmed with Microsoft, a status update was noted on the Azure dashboard.

Azure Data Box Dashboard Status after order placed

After a few days, a box arrived with the Data Box disks, cables and return shipping labels enclosed. Also received was an email notification indicating the disks had arrived.

Email notice Data Box has arrived on site (on-prem if you prefer)

The following is the physical box that contains the Data Box disks that I received from Microsoft.

The shipping box with Data Box Disks arrives

Once you get the Data Box, go to the Azure portal for Data Box and access the tools. There are tools and commands for Windows as well as Linux that are needed for accessing and unlocking the disks. This is where you also obtain device IDs. You will also need to have the access key phrase you specified in an earlier step as part of placing the order.

Access Data Box Software Tools and Keys from Azure Portal

Inside the shipping box was a pair of 8 TB SATA SSDs, SATA to USB cables, along with return shipping labels.

Contents inside the shipping box, two Data Box 8 TB disks

From the Azure portal, access the device IDs that will be needed, along with the passphrase for unlocking the Data Box disks. You will also want to download the tools as well as follow other instructions on the portal for accessing the disks.

Azure Data Box tools, device IDs and Keys

The Windows system I used for testing is a virtual machine hosted on a VMware vSphere ESXi 6.7 host. After physically attaching the Data Box Disks to the VM host, a virtual or software attachment was done by adding USB devices to the VM.

Virtual Attach of Data Box Disks to VMware vSphere ESXi host and guest VM

Once the VM had the Data Box disks attached and mapped, they appeared to Windows. After downloading the Data Box software tools and unlocking the devices, they were ready to copy data to. Note that the disks appear as regular Windows devices once unlocked; simply using BitLocker does not unlock the drives, you need to use the Data Box tools. Speaking of Windows disks, there are a couple of folders on the Data Box disk when shipped, including one for Block Blob and one for Page Blob, along with verification items.

View of Data Box Disks (8 TB each) after attaching to Windows system

Note that you are given several days as part of the base transfer cost, after which fees for extra days apply. Since I had a few extra days, I used some of the excess capacity to do some staging and reorganization of data before the actual copy.

Data copy is done using your choice of tools, for example Robocopy among many others; I used a combination of Robocopy and Retrospect. Note that most data should be placed in the folder or directory structure of your choice inside the Block Blob folder. Page Blobs are for VHDX files to be used with virtual machines on Azure. After spending a few days copying the data I wanted to move, along with performing verification, it was time to pack up the devices.
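
As a sketch of what that copy step can look like, assuming the Data Box disk is unlocked and mounted as drive E: (the paths, drive letter, and options below are illustrative assumptions, not the exact commands used in this test):

```python
import subprocess

# Assumed example: data destined for Azure block blobs goes under the
# Data Box disk's BlockBlob folder. All paths here are hypothetical.
src = r"D:\data\projects"          # hypothetical on-prem source
dst = r"E:\BlockBlob\projects"     # hypothetical destination folder

# /E copies subfolders (including empty), /COPY:DAT data+attributes+timestamps,
# /MT:16 multithreaded, /R:2 /W:5 limited retries, /LOG captures results.
subprocess.run([
    "robocopy", src, dst,
    "/E", "/COPY:DAT", "/MT:16", "/R:2", "/W:5",
    r"/LOG:C:\logs\databox_copy.log",
], check=False)  # robocopy exit codes below 8 indicate success variants
```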

As a reminder, blobs are analogous to, and what Microsoft Azure refers to instead of, objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte page aligned, for VHDX), and append (similar to other vendors' object storage). In addition to blobs, Microsoft Azure supports file (SMB and NFS) access, along with table (database) and queue storage services.
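
For comparison, the following is a minimal sketch of what uploading directly to those blob types over the network looks like with the azure-storage-blob Python SDK (v12); the connection string, container, and file names are placeholders, not values from this test drive:

```python
from azure.storage.blob import BlobServiceClient

# Placeholders; a real connection string comes from the storage account.
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("databox-demo")

# Ordinary files belong in block blobs.
with open("report.pdf", "rb") as data:
    container.upload_blob("folder/report.pdf", data, blob_type="BlockBlob")

# VHDX images belong in page blobs (512-byte page aligned).
with open("vm-disk.vhdx", "rb") as data:
    container.upload_blob("vhds/vm-disk.vhdx", data, blob_type="PageBlob")
```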

The following shows the return label attached to the shipping box that contains the Data Box disks and cables. I also included a copy of the shipping label inside the box just in case something happened during shipment. Once prepared for delivery, I took the box to a local UPS store where I received a shipment receipt (not shown). Later that day I also received an email from Microsoft indicating the shipment was in-progress.

Data Box disks packaged with return receipt (was in the box)

The Azure portal shows status of Data Box shipment being sent to Microsoft, along with a follow-up email notification.

Azure Data Box portal status

Email notification of Data Box on the way to Microsoft.

Notice data box is on the way to Azure

After a few days, checking the Azure Portal showed the Data Box had arrived at Microsoft and copy operations were underway. Remember, the storage account you specified back in the early steps is where you will look for your data. This is something I think Microsoft can improve on, by providing a link or some reminder in the status of where the data is being copied to. Likewise, a copy-completion email notice would be handy after getting used to the other alerts earlier in the process.

Azure Data Box portal showing disk copy operation status

Looking in the Blob storage resources of the Azure storage account specified during the ordering process, the contents of the Data Box Disks can be found.

Contents of Data Box disks copied into specified Azure Blobs and storage account

The following shows folders that I had copied from on-prem systems to the Data Box, now located in the proper Azure block blobs. Not shown are the page blobs where I moved some VHDXs.

Mission accomplished, data folders now stored in Azure block blobs
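
For those who prefer verifying from code rather than the portal, here is a minimal azure-storage-blob sketch that lists what landed in the storage account specified at order time (connection string and names are placeholders):

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("databox-demo")

# Enumerate the copied folders/files and their sizes.
for blob in container.list_blobs(name_starts_with="folder/"):
    print(blob.name, blob.size)
```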

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Overall, the test drive of the Azure Data Box Disk solution was positive, and I look forward to trying out some of the other Data Box solutions, both offline and online options, in the future. Continue reading Part 4 Microsoft Azure Data Box Disk Impressions as part of this series, including Microsoft Azure Data Box Disk Test Drive Review.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Family #blogtobertech

Microsoft Azure Data Box Family is part two of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Overview

Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online and offline solutions comprising hardware (except Data Box Gateway), software tools, and cloud services.

Data Box Online

Microsoft has two online Data Box offerings that provide real-time access to Azure cloud storage resources from on-prem, including remote and edge locations. The online Data Box solutions include Edge and Gateway, both with local on-prem storage.


Data Box Edge image via Microsoft.com

Data Box Edge (Preview)

Currently in preview, Data Box Edge is a 1U appliance that combines hardware along with software resources for deployment on-prem at edge or remote locations. Data Box Edge places locally converged compute and storage resources as an appliance, along with connectivity to Azure cloud-based resources.

Intended workloads and applications for Data Box Edge include remote AI, ML, and DL inferencing; data processing or pre-processing before sending to the Azure cloud; and functioning as an edge compute, data protection, and data transfer platform (e.g., cloud storage gateway) with local compute. Data Box Edge is similar in functionality and focus to other cloud service provider solutions such as AWS Snowball Edge (SBE). Management tools include the Data Box Edge resource in the Azure portal for management from a web UI, to create and manage resources, devices, and shares.

Other Data Box Edge attributes include:

  • Supports Azure Blob or Files via SMB and NFS storage access protocols
  • Dual Intel Xeon processors each with 10 CPU cores, 64GB RAM
  • 2 x 10 Gbps SFP+ copper cables, 2 x 1 Gbps RJ45 cables
  • 8 NVMe SSDs (1.6 TB each), no HA, 12.8 TB total raw capacity
  • 2 x 1 GbE (one for management, one for user access)
  • 2 x 25 GbE (can operate at 10 GbE) and 2 x 25 GbE ports
  • Local web UI for management and configuration

Data Box Gateway (Preview)

Also in preview, Data Box Gateway is a virtual machine (VM) based, software-defined appliance that runs on VMware vSphere (ESXi) or Microsoft Hyper-V hypervisors. The functionality of Data Box Gateway is that of a cloud storage gateway, providing access to Azure Blob (page and block) or Files (NAS) via SMB or NFS protocols. Learn more about both Data Box Edge and Data Box Gateway here, including pricing here.

Data Box Offline Solutions

Microsoft has several offline Data Box offerings, including previously available and new in-preview models. Offline Data Box solutions enable large amounts of data to be moved from on-prem primary, remote, and edge locations to Azure cloud storage resources. Bulk data movement operations can be one-time or recurring, in support of big data migration for energy, research, media & entertainment, and other large volumes of data.

Other bulk movement use cases include archive, backup, BC/DR, and virtual machine and application migration, among others. Use Data Box offline solutions when large amounts of data need to be moved from on-prem to Azure cloud faster than available networks will support.

Offline Data Box solutions include:

  • Data Box Heavy (Preview) 1 PB Storage, 800 TB usable
  • Data Box 100 TB (80 TB usable)
  • Data Box Disk (Preview) 40 TB (35 TB Usable)


Data Box Heavy 1 PB (Preview) image via Microsoft.com

Data Box Heavy 1 PB (Preview)

  • Appliance with Up to 800 TB usable capacity per order
  • One system per order
  • Supports Azure Blob or Files
  • Copy data to up to 10 storage accounts
  • 1 x 1/10 Gbps RJ45 connector, 4 x 40 Gbps QSFP+ connectors
  • AES 256-bit encryption
  • Copies data using NAS SMB and NFS protocols


Data Box 100TB image via Microsoft.com

100 TB Data Box

  • An appliance that supports 80 TB usable storage capacity
  • Supports Azure Blob or Files
  • Copies data to 10 storage accounts
  • 1 x 1/10 GbE RJ45 connector
  • 2 x 10 GbE SFP+ connector
  • AES 256-bit encryption
  • Storage access and copy via SMB and NFS NAS protocols

Case of Data Box Disks image via Microsoft.com

Data Box Disk 40 TB (Preview)

  • Up to 35 TB usable capacity per order
  • Up to 5 SSDs per order
  • This is what I tested (2 x 8 TB)
  • Supports Azure Blob storage (Block and Page)
  • Copies data to a single storage account
  • USB/SATA II, III server I/O interface (comes with SATA to USB connector cables)
  • AES 128-bit encryption
  • Copy data with standard tools

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Which Microsoft Azure Data Box is the best? That depends on your needs and requirements.

Microsoft along with other major cloud service providers continue to evolve their data migration services. Realizing that customers who need, want, or have to get data to the cloud also need to remove barriers, solutions such as Azure Data Box are a step in eliminating cloud barriers while addressing cloud concerns. Continue reading Part 3 Microsoft Azure Data Box Disk Test Drive Review and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft Azure Data Box Family.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft announced Azure Data Box updates #blogtobertech

Microsoft announced Azure Data Box updates is the first in a series of four posts looking at Data Box, including a test drive experience. View Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Family Page image via Microsoft.com

At Ignite, Microsoft announced Azure Data Box updates, which means it's time for a test drive and review. Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online and offline solutions comprising hardware (except Data Box Gateway), software tools, and cloud services. In general, Data Box enables bulk movement and migration of data from on-prem environments to Azure cloud storage, including blob (e.g., object) and file (e.g., NAS accessible) resources.

What's The Need for a Data Movement Appliance Service

Some might ask: why do you need a Microsoft Azure Data Box when there are fast networks? Good question, assuming you have fast networks that can move large amounts of bulk data promptly. Microsoft supports traditional Internet-based access to Azure cloud resources for data migration, along with the higher speed ExpressRoute service, similar to Amazon Web Services (AWS) Direct Connect among other options.

On the other hand, if you need to move an amount of data that would take weeks, months, or longer sending over expensive networks, then solutions like Data Box are an option. Microsoft is not alone or unique in having data storage migration or movement services. AWS has Snowball, Snowball Edge with compute, as well as the truck-sized Snowmobile for large-scale data movement. Google also has its Transfer services, including the Google Transfer Appliance.

Who is Azure Data Box for?

Azure Data Box is for those who need to migrate data to Azure cloud storage and other services on a one-time or recurring basis. Another scenario is for those who need to have on-prem storage and optional compute at remote or edge locations in support of data acquisition, media & entertainment, energy exploration, AI, ML, DL inferencing, local data processing, pre-processing before sending to cloud among other workloads.

Yet other scenarios are for those who need to move large amounts of data online, offline, or in disconnected (also known as submarine) mode where a connection to the internet is not always available. Bulk data movement also applies to one-time as well as recurring data protection such as archive, backups, and BC/DR, as well as data shipping, virtual machine farm relocation, SQL Server data migration to cloud, and data center consolidation among many other scenarios.

What is Azure Data Box

Azure Data Box is a combination of hardware, software, and cloud services that supports data migration (online and offline) from on-prem environments, including remote or edge, to Azure cloud storage resources. There are different Data Box solutions available or in preview to meet various needs for performance, capacity, and functionality, with as well as without compute. In addition to being used for data migration, there are also Data Box solutions (e.g., Edge) that converge compute and storage for deployment at remote or edge locations.

Data Box Gateway is a software-defined virtual machine appliance that deploys on VMware and Microsoft (e.g., Hyper-V) hypervisors. Offline Data Box solutions scale from single 8 TB SSDs to PBs of capacity with various functionality.

As a reminder, blobs are analogous to, and what Microsoft Azure refers to instead of, objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte page aligned, for VHDX), and append (similar to other vendors' object storage). In addition to blobs, Microsoft Azure supports file (SMB and NFS) access, along with table (database) and queue storage services.

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Azure Data Box type solutions and services are becoming more common as well as more diverse. With the addition of compute in some of these solutions to support remote edge workloads, the lines may blur with some of the converged and hyper-converged infrastructure (HCI) solutions. Likewise, keep an eye on how cloud service providers leverage solutions like Data Box Edge to further extend their reach out to the edge, enabling fog (e.g., cloud at the edge) among other converged functionality. Continue reading Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft announced Azure Data Box updates.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

AWS Snowball Edge SBE Converged Cloud Storage Appliance

As part of extending their cloud platform reach, recent Amazon Web Services (AWS) announcements include the AWS Snowball Edge (SBE) converged cloud storage appliance. Snowball Edge has evolved from its previous focus as a data transfer and migration platform appliance to now include support for on-prem compute. SBE has previously been available as an appliance that ships from AWS to your location as a service to enable bulk data movement to the public cloud (e.g., an AWS Simple Storage Service (S3) bucket). With this new capability, AWS is enabling SBE to support on-prem compute similar to Elastic Compute Cloud (EC2) cloud instances.

AWS Snowball Data Migration at PB scale
AWS Snowball Appliance Image via AWS.com

What is AWS Snowball

Snowball is a bulk physical data migration appliance that AWS ships to your location. You use Snowball by setting up a copy job with AWS; when the device arrives at your site, you set it up and run the copy jobs, moving data from source to Snowball destination. Once data is copied, you ship the Snowball back to an AWS region and availability zone (AZ), where its contents are copied into a Simple Storage Service (S3) bucket of your choice. Once the copy job into AWS S3 is complete, AWS performs a secure erase of the Snowball.
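
The post walks through this via the AWS management console; for reference, a hedged boto3 sketch of creating an import job looks roughly like the following, where the bucket ARN, address ID, and role ARN are placeholders you would create beforehand:

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

# Placeholders: create the shipping address with create_address() and the
# IAM role separately; capacity and shipping options vary by region.
response = snowball.create_job(
    JobType="IMPORT",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::my-import-bucket"}]},
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
    SnowballCapacityPreference="T80",
    ShippingOption="SECOND_DAY",
)
print(response["JobId"])
```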

Basic Snowball includes 10 GbE network connections (RJ45 and SFP+ [fiber or copper]). Security and encryption include 256-bit keys that can be managed via AWS Key Management Service (KMS). Note that keys are not sent to or stored on the device, for security during transit. For additional protection, tamper-resistant seals are included, along with a Trusted Platform Module (TPM) to detect unauthorized hardware, firmware, or software changes.

End-to-end tracking is enabled using E Ink shipping labels, which allow monitoring via the AWS Simple Notification Service (SNS). Once your data transfer job completes and is verified, a software erasure of the SBE is performed by AWS following NIST media handling guidelines.

For management, SBE has an API for customer integration, as well as the ability to create and manage transfer jobs via the AWS management console. The SBE adapter also gives customers direct access to Snowball, where it appears as an S3 endpoint (how you access the storage and data).
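
In practice, that means standard S3 tooling can point at the device. A minimal boto3 sketch, assuming a hypothetical local adapter address and device credentials obtained via the Snowball client (all values here are placeholders):

```python
import boto3

# Assumed local adapter endpoint and device credentials (placeholders).
s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.1.100:8080",
    aws_access_key_id="<device-access-key>",
    aws_secret_access_key="<device-secret-key>",
)

# Copy a file toward the bucket configured for the transfer job.
s3.upload_file("backup.tar", "my-import-bucket", "archives/backup.tar")
```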

Backside view of AWS Snowball (image via Amazon.com)

Additional Snowball Speeds and Specification Feature Feeds include:

  • Storage space capacity of 50TB (42TB usable) or 80TB (72TB usable)
  • Network connectivity 10 GbE RJ45 (Cat6), SFP+ (Copper and Optical). Cables include RJ45 and Copper SFP+. For Fiber attached Ethernet, the customer supplies their own SFP+ optical cables.
  • SBE is designed for office environments, as well as data centers (e.g., about 68 dB), and weighs about 47 pounds.
  • Power requirements include NEMA 5-15p (standard wall outlet) 100-200 volts with power cable included.

Note that for traditional Snowball deployments, an on-prem workstation or server is needed to copy data from source locations to the Snowball device.

How AWS Snowball and Snowball Edge work

How AWS Snowball Works

Referring to the image above, the first step in using AWS Snowball (or Snowball Edge) is to place an order via the AWS management console (A). Part of the ordering process involves setting up the data transfer job and, in the case of AWS Snowball Edge, defining the EC2 instance and image (read more about that here via AWS). After placing the order and setup, the AWS Snowball arrives at your location (B); on-site setup is done and data transfer performed (C). Once data is transferred, the AWS Snowball is returned to the designated AWS location via two-day shipping (D) and the data is copied into your specified S3 or Glacier bucket (E). After your data is transferred into the S3 or Glacier bucket you specified as part of the transfer job, you are able to do what you want with your files, folders, images, videos, VHDXs, VMDKs, ISOs, little data, and big data.

What is AWS Snowball Edge

AWS has enhanced its Snowball Edge (SBE) data mobility, migration, and transport appliance to now also include compute. For those not familiar, Snowball is an appliance that comes in various sizes; you order it from AWS, it shows up at your site, and then you copy your data to it for migration into AWS. Once data is copied, you return it to AWS, where the data then appears in your designated S3 bucket. From your S3 bucket, you can then move the data, files, volumes, and images to other locations, use them for standing up EC2 compute, populate databases, or perform other items.

With the new compute feature, AWS is enabling compute on the Snowball Edge appliance, functioning similar to an EC2 instance, except that it is on your site. This means you can use the compute to run your own custom AMIs (Amazon Machine Images) on-site or on-prem in support of data migration, conversion, or another process. You can also keep the appliance on-site for as long as you want (granted, your credit card gets charged) to support development, test, extended migration, or to have a converged, or hyper-converged, platform.

Note that with SBE having compute capability, you can now run an EC2 image that functions as your copy server, eliminating the need to have a workstation or server on-prem for the copy operation.

Additional AWS Snowball Edge Speeds and feature function feeds include:

  • 100TB (82TB usable) storage space capacity
  • 10 GbE network, along with 10/25 GbE SFP28 and 40 GbE QSFP+ with device-based encryption (customer provided network cables)
  • Local computing with EC2 and Lambda functions for remote deployment along with scale-out clustering of multiple SBE’s
  • S3 compatible endpoint along with NFS endpoint (mount point) using both NFS v3 and v4.1.
  • Weighs about 50 pounds; tamper-evident seals along with TPM similar to traditional Snowball, with detection of hardware, firmware, or software changes.
  • Can exist in an office environment, or data center.
  • Power cables are included, NEMA 5-15p, 100-220 volts, 400 watts.

What is AWS Snowmobile

Need something with more capacity than an SBE? AWS has a more extensive version called Snowmobile, supporting up to 100 PB, that is brought to your site via a 45-foot-long tractor-trailer truck. Both SBE and Snowmobile physically move data from your location to an AWS region availability zone (AZ), aka data center, where it is placed into the Simple Storage Service (S3) or Glacier bucket of your choice. Once in the S3 or Glacier bucket, you can move the data to wherever you need it.

Why Snowball Edge and Snowmobile vs. Fast Networks

Some people ask why the need for services such as SBE and Snowmobile, or physically shipping your SSDs, HDDs, tape, or other storage media to a cloud provider, in the Internet era of fast networks. The reason can be quite simple: most environments do not have internet connection speeds of 10 GbE or higher that can be dedicated, outside of regular use, to data movement at scale.
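
A quick back-of-envelope illustrates the point. The sketch below estimates how long a given capacity takes to move over a link, assuming an optimistic sustained utilization; real links shared with production traffic do worse:

```python
def transfer_days(capacity_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Days to move capacity_tb terabytes over a link_gbps link."""
    bits = capacity_tb * 1e12 * 8                    # decimal TB to bits
    seconds = bits / (link_gbps * 1e9 * efficiency)  # effective throughput
    return seconds / 86_400

# Snowball-scale (80 TB) vs. Snowmobile-scale (100 PB = 100,000 TB) examples:
# roughly 9.3 days, 0.9 days, and 1,157 days (over three years) respectively.
for tb, gbps in [(80, 1), (80, 10), (100_000, 10)]:
    print(f"{tb:>9,} TB over {gbps:>2} Gbps: {transfer_days(tb, gbps):,.1f} days")
```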

Likewise, some public cloud service providers have limitations on the network speed of their front-end general-purpose Internet access.

Note that some providers such as AWS have high-speed, low-latency direct connect services from partner staging locations. However, those too may be limited in speed for large bulk transfers. AWS also has other performance-enhanced services for general Internet access, including S3 Transfer Acceleration. Note that Microsoft Azure has special connectivity options such as ExpressRoute, while Google Cloud Platform (GCP) has Cloud Interconnect.

Is AWS SBE a CI, HCI, CiB or Appliance?

The answer to the question of whether SBE is a Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI), Cloud in a Box (CiB) or Cloud Appliance depends on your view and definition of those deployment models. Some will argue that SBE is a CI or HCI as well as a CiB based on comparisons with what Cisco, Dell Technologies, HPE, Microsoft (Azure Stack and Windows S2D), NetApp, Nutanix, Pivot3 and VMware vSAN among others offer.

On the other hand, some will argue that SBE is not the same as the above offerings and does not meet the definition of their CI, HCI, CiB or cloud appliance. What is important is not whether it is CI, HCI, CiB or appliance; rather, it is what it can do, and how it can adapt to your environment and work for you vs. you working for it. In other words, what is important is the enablement a solution provides vs. whether it is CI, HCI, CiB or something else. Meanwhile, watch to see who ignores SBE, who welcomes it to their market space, and who throws mud balls and fud balls at Snowball.

When to use Snowball vs. Snowball Edge

If all you need is a bulk data migration appliance using one of your servers or workstations for smaller amounts of data, traditional Snowball is a good fit. On the other hand, if you need to move more data, leverage SBE-enabled on-prem compute with EC2 and Lambda functionality for a short or long-term duration, or scale out to create a cluster, then SBE is for you. SBE is also a good fit for environments that need short-term, as well as longer-term, deployment of compute, storage and network (e.g., converged). Examples include factory environments, rugged implementations on ships, energy exploration and processing, traveling venues and sporting events, and distributed environments being consolidated, among others.

AWS Regions and AZs image via AWS.com

What About AWS Snowball Edge Pricing

Pricing varies based on the AWS region you are using for your transfer and management. Another variable is whether you are selecting data transfer only, or enabling an EC2 compute instance on-prem. Yet another pricing variable is how long you keep the Snowball Edge on-prem. You are given ten (10) free days as part of your data transfer job, along with the days needed for shipping and return.

Beyond the ten free days, you will pay a daily rate that varies. The longer you keep the SBE on-prem, for example committing to a one or three-year pre-pay, the larger the discount you receive. Also note that there are no data transfer fees for moving data into AWS. However, standard pricing applies once the data is stored into AWS, or moved. Also note that standard AWS storage charges (e.g., S3, Glacier, along with API calls) apply once data is stored.

As an example, for data transfer only, the service fee for a data transfer job is USD 300 for the US and other non-Asia-Pacific (Singapore) regions. Additional days are $30 each.

Another example is selecting data transfer plus an EC2 compute instance, which varies by region; an example is $500 for the transfer job (US East Northern Virginia or Ohio) plus a $50-a-day extra fee. However, if you are willing to pay up front for one year, the daily fee drops to $42 (varies by region), and to $35 a day for a three-year commitment.
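To put those list prices in perspective, here is a quick back-of-the-envelope sketch (in Python) of the fee structure described above; the day counts are made-up inputs for illustration.

```python
def sbe_cost(job_fee, daily_rate, days_on_site, free_days=10):
    """Estimate the Snowball Edge service cost: a flat per-job fee plus a
    daily rate for each day kept beyond the included free days."""
    billable_days = max(0, days_on_site - free_days)
    return job_fee + billable_days * daily_rate

# Data transfer only (US regions): $300 job fee, $30 per extra day.
print(sbe_cost(300, 30, 25))   # 300 + 15 * 30 = 750

# Transfer plus EC2 compute (US East): $500 job fee, $50 per extra day.
print(sbe_cost(500, 50, 25))   # 500 + 15 * 50 = 1250
```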

For some environments, it may cost less to buy a server with storage, set it up and manage, while for others, the simplicity of a turnkey converged platform may be more cost-effective along with better value. Learn more about AWS Snowball Edge pricing here.

Where to learn more

Learn more about AWS, Snowball Edge, Cloud and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Has AWS embraced hybrid public cloud and on-prem computing? IMHO, while AWS is making it easier for environments to use, access, as well as move to the public cloud, they are still focused on the public cloud as the destination. In other words, with the AWS Snowball Edge (SBE) converged cloud storage appliance, AWS is making it easy to move your data and applications to their services, as well as to access them.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

May 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 5 (May 2018)

Hello and welcome to the May 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the April 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

May has been a busy month with a lot of data infrastructure related activity from software-defined virtual, cloud, container, converged, serverless to legacy, hardware, software, services, server, storage, I/O and networking along with data protection topics among others.

In this issue, buzzword topics include GDPR, NVMe, NVMeoF, Composable, Serverless, Data Protection, SCM, Gen-Z, and MaaS.

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

May has been a busy month, some data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection activity includes among others:

Depending on when you read this, the new European Union General Data Protection Regulation (GDPR) is either days away, or already in effect. For those who are not aware of GDPR other than seeing many inbox items in your email pertaining to it, here are some resources as a refresher or primer:

May Buzzword, Buzz Topic and Trends

Besides data protection and GDPR, other recent data infrastructure related news, trends, technologies and topics to keep an eye on (besides AI, ML, DL, AR/VR, IoT, Blockchain, Serverless) include Metal as a Service (MaaS), which might be familiar to some and new to others. Canonical has been busy for some time now with MaaS, including in Ubuntu, and they are not alone, with variations appearing among various managed service providers, hosting and cloud providers as well. NVMe has become a more common topic, technology and trend, including for use in servers as well as over fabrics (e.g. NVMe over Fabrics) as a language for server, storage, and I/O communication.

A new emerging companion to NVMe is Gen-Z, which initially is a companion to PCIe. Longer term, Gen-Z could possibly become a replacement, as well as being used for accessing dynamic random access memory (DRAM) among other uses. Storage Class Memory (SCM) has been an industry conversation topic for several years now, with new persistent memories (PMEM) that combine the best of traditional DRAM (speed and write endurance) with the persistence, higher capacity, and lower cost of traditional NAND flash SSDs.

Another trend topic is that for some, ASIC, FPGA and GPU are new companions to standard commodity compute processors along with servers, yet for others it may be déjà vu, as they have been used for years (ok, decades) in some solutions. For now, a few other buzzwords and buzz terms to add to or refresh in your data infrastructure vocabulary are distributed ledgers (aka blockchains), composable resources, and ephemeral instance storage (storage on a cloud instance).

May NVMe Momentum Movement Activity

May saw a lot of NVMe related activity, from chips and components (adapters, devices) to systems spanning direct attached to NVMe over Fabric (NVMeoF). Here is a primer (or refresh) for NVMe along with various deployment options. NVMeoF includes RDMA over Converged Ethernet (RoCE) based, along with NVMe over Fibre Channel (FC-NVMe), as well as emerging NVMe over IP.

NVMe being used for front-end access via shared PCIe along with back-end devices

There are many different facets of NVMe including for use as a front-end on storage systems supporting server attachment (e.g. competes with Fibre Channel, iSCSI, SAS among others). Another variation of NVMe is as a back-end for attachment of drives or other NVMe based devices in storage systems, as well as servers.

Front-end using traditional block SAN access with back-end NVMe, SAS and SATA devices

Read more about the many different options and variations of NVMe including key questions to ask or understand, deployment topology along with other related topics at thenvmeplace.com.

Various NVMe front-ends including NVMeoF along with NVMe back-end devices (U.2, M.2, AiC)

Software Defined Data Infrastructure Activity

Amazon Web Services (AWS) continues to add new features and functionality, as well as extending new and existing capabilities into various regions. Some recent updates include new Elastic Cloud Compute (EC2) Microsoft Windows Server versions 1709 and 1803 Amazon Machine Images (AMIs). Other AWS updates include spot instance support for Red Hat BYOL (Bring Your Own License), VPN enhancements, X1e instances available in Frankfurt, an H1 instance price reduction, as well as LightSail now in the Canada, Paris, and Seoul regions.

For those who are not familiar with LightSail, these are virtual private servers (VPS), which are different from traditional EC2 instances. LightSail can be a cost-effective way for those who need to move out of general-population shared hosting, yet cannot justify a full EC2 instance while requiring more than a container.

LightSail instances are also available with various software pre-installed, such as for WordPress websites among others. For example, I have used LightSail as a backup and standby WordPress site for StorageIOblog using UpdraftPlus Pro for data protection.

In other news, AWS C5d EC2 instances are available in various regions. C5d instances are available with 2, 4, 8, 16, 36 and 72 vCPUs along with up to 1800GB of NVMe based ephemeral storage for on-demand reserved or spot instances.

Note that instance-based storage is temporary, meaning that it persists only for the life of the instance. What this means is that if you stop and restart the instance, the data does not persist. Instance-based storage is useful for data that can be protected or persisted to other storage, including EBS (Elastic Block Storage). Usage includes batch, log and analytics processing, burst buffers, cache or workspace.

AWS also announced a new Simple Storage Service (S3) storage class a month or so ago called S3 One Zone-Infrequent Access (One Zone-IA). This new storage class primarily provides a lower cost of storage with lower durability (e.g., data spread across one zone vs. multiple). Over the past couple of months, I have been migrating from S3 Infrequent Access (IA) as well as Standard into One Zone-IA. Some of my active data remains in the S3 Standard storage class, while cold archives are in Glacier.

A tip about migrating to One Zone-IA, as well as between other S3 storage classes, is to pay attention to your API calls and monthly budget. You might see an increase in S3 costs during the migration window due to API calls (gets, puts, lists), which then settles into the lower prices once data has been moved. In other words, pay attention to how many API calls you are allowed per storage class per month, along with other fees, instead of focusing only on cost per TByte. Read about other recent AWS news updates here.
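As an illustration of why the API call tip above matters, here is a minimal boto3 sketch of such a migration; the bucket name and prefix are hypothetical. Changing an object's storage class is itself a copy (PUT) request per object, which is what drives the temporary cost bump during a bulk move.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # hypothetical bucket name

# Each storage-class change below is a billed COPY/PUT request, in addition
# to the LIST calls made by the paginator. Objects over 5GB would need a
# multipart copy instead.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix="archives/"):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            StorageClass="ONEZONE_IA",
            MetadataDirective="COPY",  # keep existing metadata
        )
```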

Software-defined storage startup Cloudian announced their technology available for test drive on Google Cloud Platform as part of a continued industry trend. That trend is for storage vendors to make their storage software technology available on different cloud platforms such as AWS, Azure, Google, Softlayer among others.

Dell Technology World 2018

Dell Technologies made several announcements as part of Dell Technologies World that are covered in a series of posts here. Announcements included PowerMax the successor to VMAX, XtremIO X2 updates, new servers, workstations among many other items, read more here.

Besides the data infrastructure, cloud service providers and systems vendors, component suppliers including Cavium announced NVMe over Fibre Channel updates (here and here), along with Marvell NVMe updates here. HPE announced new thin clients and software (t430 Thin Client, HP mt44 Mobile Thin Client, HP ThinPro software), as well as updates to 3PAR and other storage solutions.

IBM announced various storage enhancements (and here), as well as a happy 30th anniversary to the IBM i systems (now Power9 based). In other news, Kaseya bought backup and data protection vendor Unitrends.

NVMe NAND flash Intel Optane

Micron announced that its first quad-level cell (QLC) NAND flash solid state device (SSD), named the 5210 ION, has begun shipping to select customers (and vendors). QLC packs or stacks 4 bits per cell. The 5210 is optimized for read-intensive workloads with up to 33% higher densities compared to previous generation TLC (triple-level cell) NAND flash. Broader market availability is expected later in fall 2018; the 5210 form factor is 2.5” as a standard SSD or HDD, with capacities from 1.92TB to 7.68TB.

In other news, Micron also announced a $10 Billion (USD) stock repurchase plan, along with an extension of its Intel partnership involving 3D NAND flash, including 96-layer 3D NAND. Meanwhile, various vendors are increasingly talking about how their systems are or will be storage class memory (SCM) ready, including for media such as Micron 3D XPoint (also known in Intel form as Optane) among others.

Microsoft has placed into public preview Azure Active Directory (AAD) Storage authentication for Azure Blobs and Queues. Azure Storage Explorer is now released as version 1.0. AAD storage authentication enables organizations to implement role-based access control of Azure storage resources. Speaking of Azure, Microsoft has published several architectures, reference and other content at the Azure Virtual Datacenter portal here.

If you have not done so, check out Azure File Sync, which is currently in public preview. Having been involved with and using it for over a year, including during the private preview, Azure File Sync is an exciting, useful technology for creating a hybrid distributed file sharing with cloud tiering solution. Learn more about Azure File Sync here and here. In other news, Microsoft has announced, as part of the April 2018 Windows 10 build, preview support for running the Google Android emulator on Hyper-V.

NetApp has had Azure based NAS storage in preview for a while now, and also announced Cloud Volumes on Google Cloud Platform (GCP). In addition to Cloud Volumes on AWS, Azure, and GCP, NetApp also announced enhanced NVMe based storage systems among other updates.

Two companies that have similar names are Opendrives (video workflow acceleration) and Opendrive (cloud storage, backup, and data protection). Meanwhile, data infrastructure startup Pavilion has received new funding as well as begun talking about their NVMe including NVMe over Fabric (NVMeOF) hardware storage system. Long-time data infrastructure converged server storage startup Pivot3 announced additional cloud workload mobility.

Pure Storage made a couple of announcements, including the FlashArray//X NVMe based shared accelerated storage system, as well as the NVIDIA (GPU powered) based AIRI Mini for AI/DL/ML.

Have you heard about Snowflake computing, aka, the cloud data warehouse solution? If not, check them out here. Another cloud-related data infrastructure vendor to look into is Upbound.io who have received additional funding for their multi-cloud management solutions.

Building off recent VMware vSphere updates (here) and Dell Technology World (here), the following is an excellent post about Instant Clone in vSphere 6.7, along with the VMware vSAN HCI assessment tool here.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mindset
Via SearchStorage: Comments Dell EMC storage IPO, VMware merger plans still unclear
Via SearchStorage: Comments Dell EMC midrange storage keeps its overlapping arrays
Via SearchStorage: Comments Dell EMC all-flash PowerMax replaces VMAX, injects NVMe
Via IronMountain InfoGoto:  The growing Trend of Secondary Data Storage

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

Dell Technology World 2018 Announcement Summary
Part II Dell Technology World 2018 Modern Data Center Announcement Details
Part III Dell Technology World 2018 Storage Announcement Details
Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure
Part V Dell Technology World 2018 Server Converged Announcement Details
April 2018 Server StorageIO Data Infrastructure Update Newsletter
VMware vSphere vSAN vCenter version 6.7 SDDC Update Summary
PCIe Fundamentals Server Storage I/O Network Essentials
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Application Data Value Characteristics Everything Is Not The Same
Data Infrastructure Resource Links cloud data protection tradecraft trends
IT transformation Serverless Life Beyond DevOps Podcast
Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips
Introducing Windows Subsystem for Linux WSL Overview
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The list spans various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is also a good resource to check out. Speaking of my books, Didier Van Hoye (@WorkingHardInIt) has a good review over on his site you can view here; also check out the rest of his great content while there.

Containers, serverless, and Kubernetes continue to gain industry adoption, as well as customer deployments. Here is some information about Microsoft Azure Kubernetes Service (AKS). Note that AWS has Elastic Kubernetes Service (EKS), and Google, VMware and Pivotal have offerings including Pivotal Kubernetes Service (PKS), among others.

Here is an interesting perspective by Ben Kepes about serverless (e.g., life beyond Kubernetes and containers, which were life beyond virtualization, which in turn was life beyond bare metal), as well as the all-too-often punditry and evangelism of something new causing something else to be declared dead.

SNIA has updated their Emerald aka Green energy effectiveness (focus on productivity) measurement specification (V3.01) including NAS NFS file activity (besides block). Learn more at snia.org/forums/green.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

June 27, 2018 – Webinar – TBA

May 29, 2018 – Webinar – Microsoft Windows as a Service

April 24, 2018 – Webinar – AWS and on-site, on-premises hybrid data protection

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Connect and Converse With Us

Storage IO RSS | LinkedIn | Facebook | Twitter @StorageIO | Google+ | Email | YouTube | Instagram

Subscribe to Newsletter – Newsletter Archives – StorageIO.com – StorageIOblog.com

What this all means and wrap-up

Data infrastructures are what exist inside physical data centers, spanning cloud, converged, hyper-converged, virtual, serverless and other software defined as well as legacy environments. So far this spring there has been a lot of data infrastructure related activity, from new technology announcements to events and trends, among others. Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter, and watch for more NVMe, Gen-Z, cloud, and data protection among other topics in future posts, articles, events, and newsletters.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

VMware vSphere vSAN vCenter v6.7 SDDC details

This is part two (part one here) of a three-part series (part III here) looking at the VMware vSphere vSAN vCenter v6.7 SDDC announcement summary, with a focus on vCenter, security, and management.

Last week VMware announced vSphere vSAN vCenter v6.7 updates as part of enhancing their software-defined data center (SDDC) and software-defined infrastructure (SDI) solutions core components. This is an expanded post as a companion to the Server StorageIO summary piece here. These April updates followed those from this past March when VMware announced cloud enhancements with partner AWS (more on that announcement here).

VMware vSphere Web Client with vSphere 6.7

What VMware announced is generally available (GA), meaning you can now download the bits (e.g., software) from here, including:

  • ESXi aka vSphere 6.7 hypervisor build 8169922
  • vCenter Server 6.7 build 8217866
  • vCenter Server Appliance 6.7 build 8217866
  • vSAN 6.7 and other related SDDC management tools
  • vRealize Operations Manager (vROps) 6.7

For those not sure or needing a refresher, vCenter Server is the software for extended management across multiple vSphere ESXi hypervisors; the hosted version runs on a Windows server platform.

Major themes of the VMware April announcements are focused around:

  • Increased enterprise and hybrid cloud scalability
  • Resiliency, availability, durability and security
  • Performance, efficiency and elasticity
  • Intuitive, simplified management at scale
  • Expanded support for demanding application workloads

Expanded application support includes traditional demanding enterprise IT, along with High-Performance Compute (HPC), Big Data, Little Data, Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL), as well as other emerging workloads. Part of supporting demanding workloads includes enhanced support for Graphics Processing Units (GPUs) such as those from Nvidia among others.

What was announced

As mentioned above and in other posts in this series, VMware announced new versions of its ESXi hypervisor vSphere v6.7, as well as virtual SAN (vSAN) v6.7 and virtual Center (vCenter) v6.7, among other related tools. One of the themes of this announcement by VMware is hybrid SDDC spanning on-site, on-premises (or on-premise if you prefer) to the public cloud. Other topics involve increasing scalability, along with stability, as well as ease of management, security, and performance updates.

As part of the v6.7 enhancements, VMware is focusing on simplifying, as well as accelerating software-defined data infrastructure along with other SDDC lifecycle operation activities. Additional themes and features focus on server, storage, I/O resource enablement, as well as application extensibility support.

vSphere ESXi hypervisor

With v6.7, ESXi host maintenance times are improved with a single reboot vs. the previous multiple reboots for some upgrades, as well as with Quick Boot. Quick Boot enables restarting the ESXi hypervisor without rebooting the physical machine, skipping time-consuming hardware initialization.

The enhanced HTML5 based vSphere client GUI (along with API and CLI) has increased feature function parity compared to predecessor versions and other VMware tools. Increased functionality includes NSX, vSAN and vSphere Update Manager (VUM) capabilities among others. In other words, not only are new technologies supported; functions you may have in the past resisted using the web-based interfaces for, due to limited capability, are being addressed with this release.

vCenter Server and vCenter Server Appliance (VCSA)

VMware has announced that, moving forward, the hosted (e.g., running on a Windows server platform) version of vCenter is being deprecated. What this means is that it is time for those not already doing so to migrate to the vCenter Server Appliance (VCSA). As a refresher, VCSA is a turnkey software-defined virtual appliance that includes vCenter Server software running on the VMware Photon Linux operating system as a virtual machine.

As part of the update, the enhanced vCenter Server Appliance (VCSA) supports new efficient, effective API management along with multiple vCenters, as well as performance improvements. VMware cites 2x faster vCenter operations per second, a 3x reduction in memory usage, along with 3x quicker Distributed Resource Scheduler (DRS) related activities across powered-on VMs.

What this means is that VCSA is a self-contained virtual appliance that can be configured for very large, large, medium and small environments in various configurations. With v6.7 vCenter Server Appliance emphasis on scaling, as well as performance along with security and ease of use features, VCSA is better positioned to support large enterprise deployments along with hybrid cloud. VCSA v6.7 is more than just a UI enhancement with v6.5 shown below followed by an image of v6.7 UI.

VMware vCenter Appliance v6.5 main UI

VMware vCenter Appliance v6.7 main UI

Besides UI enhancements (along with API and CLI) for vCenter, other updates include a more robust data protection (aka backup) capability for the vCenter Server environment. In the prior v6.5 version there was a basic capability to specify a destination to which vCenter configuration information could be sent for backup data protection (see image below).

VMware vCenter Appliance v6.5 backup

Note that the VCSA backup only provides data protection for the vCenter Appliance, its configuration, settings along with data collected of the VMware hosts (and VMs) being managed. VCSA backup does not provide data protection of the individual VMware hosts or VMs which is accomplished via other data protection techniques, tools and technologies.

In v6.7, vCenter now has enhanced capabilities (shown below) for enabling data protection of configuration, settings, performance and other metrics. What this means is that with the improved UI it is now possible to set up backup schedules as part of enabling automation for data protection of vCenter servers.

VMware VCSA v6.7 enhanced UI and data protection aka backup

The following shows some of the configuration sizing options as part of VCSA deployment. Note that the vCPU, Memory, and Storage are for the VCSA itself to support a given number of VMware hosts (e.g., physical machines) as well as guest virtual machines (VM).

 

Size           vCPU    Memory    Storage    Hosts      VMs
Tiny              2      10GB      300GB       10      100
Small             4      16GB      340GB      100    1,000
Medium            8      24GB      525GB      400    4,000
Large            16      32GB      740GB    1,000   10,000
Extra Large      24      48GB    1,180GB    2,000   35,000

vCenter 6.7 VCSA sizing and the number of physical machines (e.g., VM hosts) and virtual machines supported
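To make the sizing guidance above easy to apply, here is a small sketch that picks the smallest VCSA deployment size covering a given environment, using the limits from the table; the function and structure names are my own, for illustration.

```python
# VCSA deployment sizes from the table above: (name, max hosts, max VMs).
VCSA_SIZES = [
    ("Tiny", 10, 100),
    ("Small", 100, 1000),
    ("Medium", 400, 4000),
    ("Large", 1000, 10000),
    ("Extra Large", 2000, 35000),
]

def pick_vcsa_size(hosts: int, vms: int) -> str:
    """Return the smallest VCSA size whose host and VM limits cover the environment."""
    for name, max_hosts, max_vms in VCSA_SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name
    raise ValueError("Exceeds a single VCSA; consider multiple linked vCenters.")

print(pick_vcsa_size(250, 2500))  # -> Medium
```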

Keep in mind that in addition to the above individual VCSA configuration limits, multiple vCenters can be grouped, including in linked mode spanning on-site, on-premises (or on-premise if you prefer) as well as the cloud. VMware vCenter Server hybrid linked mode enables seamless visibility and insight across on-site and on-premises environments, as well as public clouds such as AWS among others.

In other words, vCenter with hybrid linked mode enables you to have situational awareness and avoid flying blind in and among clouds. As part of hybrid vCenter environment support, cross-cloud (public, private) hot and cold migration is supported, including clone as well as vMotion provisioning across mixed VMware versions. Using linked mode, multiple roles, permissions, tags, and policies can be managed across different groups (e.g., unified management) as well as locations.

VMware and vSphere Security

Security is a big push for VMware with this release, including Trusted Platform Module (TPM) 2.0 along with Virtual TPM 2.0 for protecting both the hypervisor and guest operating systems. Data encryption was introduced in vSphere 6.5 and is enhanced with simplified management, along with protection of data at rest and in flight (while in motion).

In other words, encrypted vMotion across different vCenter instances and versions is supported, as well as across hybrid environments (e.g., on-premises and public cloud). Other security enhancements include tighter collaboration and integration with Microsoft for Windows VMs, as well as vSAN, NSX and vRealize for a secure software-defined data infrastructure aka SDDC. For example, VMware has enhanced support for Microsoft Virtualization Based Security (VBS) including Credential Guard, where vSphere provides a secure virtual hardware platform.

Additional VMware 6.7 security enhancements include multiple SYSLOG targets and FIPS 140-2 validated modules. Note that there is a difference between FIPS certified and FIPS validated; VMware vCenter and ESXi leverage two modules (VM Kernel Cryptographic and OpenSSL) that are currently validated. VMware is not playing games like some vendors when it comes to disclosing FIPS 140-2 validated vs. certified.

Note, when a vendor mentions FIPS 140-2 and implies or says certified, ask them if they indeed are certified. Any vendor who is actually FIPS 140-2 certified should not get upset if you press them politely. Instead, they should thank you for asking. On the other hand, if a vendor gives you a used-car-salesperson style dance or gets upset, ask them why they are so sensitive, or perhaps what they are ashamed of or hiding, just saying. Learn more here.

vRealize Operations Manager (vROps)

The vRealize Operations Manager (vROps) v6.7 dashboard via the vSphere client plugin provides an overview of cluster views and alerts for both vCenter and vSAN. What this means is that you will want to upgrade vROps to v6.7, the benefit being dashboards for optimal performance, capacity, troubleshooting, and management configuration.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

VMware continues to enhance their core SDDC data infrastructure resources to support new and emerging, as well as legacy enterprise applications at scale. VMware enhancements include management, security along with other updates to support the demanding needs of various applications and workloads, along with supporting application developers.

Some examples of demanding workloads include, among others, AI, Big Data, Machine Learning, in-memory and high-performance compute (HPC) among other resource-intensive new workloads, as well as existing applications. This includes enhanced support for Nvidia physical and virtual Graphics Processing Units (GPUs) that are used in support of compute-intensive graphics, as well as non-graphics processing (e.g., AI, ML) workloads.

With the v6.7 announcements, VMware is providing proof points that they are continuing to invest in their core SDDC enabling technologies. VMware is also demonstrating the evolution of the vSphere ESXi hypervisor along with associated management tools for hybrid environments, with ease of management at scale along with security. View more about VMware vSphere vSAN vCenter v6.7 SDDC details in part three of this three-part series here (focus on server storage I/O, deployment information and analysis).

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

VMware vSphere vSAN vCenter Server Storage I/O Enhancements

This is part three of a three-part series looking at last week's v6.7 VMware vSphere vSAN vCenter server storage I/O enhancements. The focus of this post is on server, storage, and I/O, along with deployment and other wrap-up items. In case you missed them, read part one here, and part two here.

VMware, as part of updates to vSphere, vSAN and vCenter, introduced several server storage I/O enhancements, some of which have already been mentioned.

VMware vSphere Web Client with vSphere 6.7

Server Storage I/O enhancements for vSphere, vSAN, and vCenter include:

  • Native 4K (4kn) block sector size for HDD and SSD devices
  • Intel Volume Management Device (VMD) for NVMe flash SSD
  • Support for Persistent Memory (PMEM) aka Storage Class Memory (SCM)
  • SCSI UNMAP (similar to TRIM) for SSD space reclamation
  • XCOPY and VAAI enhancements
  • VMFS-6 is now the default file system
  • VMFS-6 SESparse vSphere snapshot space reclamation
  • VVOL supporting SCSI-3 persistent reservations and IPv6
  • Reduced dependencies on RDMs with VVOL enhancements
  • Software-based Fibre Channel over Ethernet (FCoE) initiator
  • Para Virtualized RDMA (PV-RDMA)
  • Various speeds and feeds enhancements

VMware vSphere 6.7 also adds native 4KN sector size (e.g., 4,096-byte blocks) in addition to traditional native and emulated 512-byte sectors for HDDs as well as SSDs. The larger block size means performance improvements along with better storage allocation for applications, particularly for large-capacity devices. Other server storage I/O updates include RDMA over Converged Ethernet (RoCE) enabled Remote Direct Memory Access (RDMA), as well as Intel VMD for NVMe. Learn more about NVMe here.

Other storage-related enhancements include SCSI UNMAP (e.g., SCSI equivalent of SSD TRIM) with the selectable priority of none or low for SSD space reclamation. Also enhanced are SESparse of vSphere snapshot virtual disk space reclamation (for VMFS-6). VMware XCOPY (Extended Copy) now works with vendor-specific VMware API for Array Integration (VAAI) primitives along with SCSI T10 standard used for cloning, zeroing and copy offload to storage systems. Virtual Volumes (VVOL) have been enhanced to support IPv6 and SCSI-3 persistent reservations to help reduce dependency or use of RDMs.

VMware configuration maximums (e.g., speeds and feeds) have also been boosted, going from 512 to 1024 LUNs per host. Other speeds and feeds improvements include going from 2048 to 4096 server storage I/O paths per host, and PVSCSI adapters now support up to 256 disks vs. 64 (virtual disks or Raw Device Mapped aka RDM). Also note that VMFS-3 is now end of life (EOL) and will be automatically upgraded to VMFS-5 during the upgrade to vSphere 6.7, while the default datastore type is VMFS-6.

Additional server storage I/O enhancements include RoCE for RDMA, enabling low-latency server-to-server memory-based data movement and access, along with Para-virtualized RDMA (PV-RDMA) on Linux guest OSs. ESXi has been enhanced with iSER (iSCSI Extension for RDMA), leveraging faster server I/O interconnects and CPU offload. Another server storage I/O enhancement is a software-based Fibre Channel over Ethernet (e.g., SW-FCoE) initiator using lossless Ethernet fabrics.

Note as a reminder or refresher that VMware also has para (e.g., virtualization-optimized) drivers for Ethernet and other networks, NVMe as well as SCSI, in addition to standard devices. For example, from a VM you can access an NVMe backed datastore using the standard VMware SATA or SCSI controllers, LSI Logic SAS, LSI Logic Parallel, VMware Paravirtual, or the native NVMe driver (virtual machine hardware version 6.5 or higher) for better performance. Likewise, instead of using the standard SAS and SCSI VM devices, the VMware Paravirtual SCSI (PVSCSI) adapter can be used for improved performance with lower CPU overhead.

Besides the previously mentioned items, other vSAN enhancements include support for logical clusters such as Oracle RAC, Microsoft SQL Server Availability Groups, Microsoft Exchange Database Availability Groups, as well as Windows Server Failover Clusters (WSFC) using the vSAN iSCSI service. Note that as a proof point of continued vSAN customer adoption, VMware is claiming 10,000 deployments. For performance, vSAN enhancements also include updates for adaptive placement, adaptive resync, as well as faster cache destage. The benefit of quicker destage is that the cache can be drained or written to disk to eliminate or prevent I/O bottlenecks.

As part of supporting expanding, more demanding enterprise and other workloads, vSAN enhancements also include resiliency updates, physical resource and configuration checks, and health and monitoring checks. Other vSAN improvements include streamlined workflows and converged management views across vCenter as well as vRealize tools. Read more from VMware about server storage I/O enhancements to vSphere, vSAN, and vCenter here.

VMware Server Storage I/O Memory Matters

VMware is also joining others with support for evolving persistent memory (PMEM), leveraging so-called storage class memories (SCM). Note, some refer to SCM as persistent memory or PM; however, context is needed, as PM can also mean physical machine, physical memory, or primary memory among others. With the new PMEM support for server memory, VMware is laying the foundation for guest operating systems as well as applications to leverage the technology.

For example, Microsoft with Windows Server 2016 supports SCMs as a block addressable storage medium and file system, as well as for Direct Access (e.g., DAX). What this means is that fast file systems can be backed by persistent faster than traditional SSD storage, as well as applications such as SQL Server that support DAX can do direct persistent I/O.

As a refresher, Non-Volatile DIMMs (NVDIMM) enable server memory by combining traditional DRAM with some persistent storage class memory. By combining DRAM and storage class memory (SCM), also known as PMEM, servers can use the RAM as fast read/write memory, with the data destaged to persistent memory. Examples of SCM include Micron 3D XPoint (also known in Intel form as Optane), along with others such as Everspin NVDIMM (available from Dell and HPE among others). Learn more about SSD and storage class memories (SCM) along with PMEM here, as well as NVMe here.
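To give a feel for what byte-addressable access means in practice, here is an illustrative Python sketch using a memory-mapped file; the file path is hypothetical. On a DAX-enabled file system backed by persistent memory, this same kind of load and store goes directly to the media, bypassing the page cache and the block I/O stack.

```python
import mmap
import os

path = "pmem_demo.bin"  # hypothetical file; imagine it on a DAX volume
with open(path, "wb") as f:
    f.truncate(4096)  # reserve one page of space

# O_BINARY matters on Windows; it does not exist on POSIX, hence getattr.
fd = os.open(path, os.O_RDWR | getattr(os, "O_BINARY", 0))
try:
    with mmap.mmap(fd, 4096) as buf:
        buf[0:5] = b"hello"       # store bytes directly into the mapping
        print(bytes(buf[0:5]))    # load them back, no read()/write() calls
finally:
    os.close(fd)
```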

Deployment, be prepared before you grab the bits and install the software

For those of you who want or need to download the bits, here is a link to the VMware software download. However, before racing off to install the new software in your production (or perhaps even lab) environment, do your homework. Read the important information from VMware before upgrading to vSphere here (e.g., KB53704) as well as the release notes, and review VMware's best practices for upgrading to vCenter here.

Some of the things to be aware of include upgrade order and dependencies, as well as making sure you have good current backups of your vSphere ESXi configuration and vCenter appliance, in addition to viewing the vSphere ESXi and vCenter 6.7 release notes here.

There are some hardware compatibility items you need to be aware of, both for this as well as future versions. Check out the VMware hardware (and software) compatibility list (HCL), along with partner product interoperability matrices, as well as release notes. Pay attention to devices depreciated and no longer supported in ESXi 6.7 (e.g., VMware KB52583) as well as those that may not work in future releases to avoid surprises.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

In case you missed them, read part one here and click here for part two of this series.

Some will ask what the big deal is, and why all the noise, coverage and discussion for a point release?

My view is that this is a big evolutionary package of upgrade enhancements and new features, even if a so-called point release (e.g., going from 6.5 to 6.7). Some vendors might have positioned this type of update as a significant upgrade, e.g., version 6.x to 7.x, to make more noise, get increased coverage, or merely enhance the appearance of software maturity (e.g., V1.x to V2.x to V3.x, and so forth).

In the case of VMware, what some might refer to as smaller point releases are the ones such as vSphere 6.5.0 to 6.5.x, among others. Thus, there is a lot in this package of updates from VMware, and it is good to see continued enhancements.

I also think that VMware is getting challenged on different fronts, including by Microsoft as well as by cloud partners among others, which is good. The reason I believe it is okay for VMware to be challenged is that, given their history, they tend to step up their game, playing harder as well as stronger against the competition.

VMware is continuing to invest in and extend its core SDDC technologies to meet the expanding demands of various organizations, from small to ultra-large enterprises. What this means is that VMware is addressing ease of use for smaller environments, as well as removing complexity to enable simplified scaling from on-site (or on-premises and on-prem if you prefer) to the public cloud.

Overall, the announced version 6.7 of the vSphere vSAN vCenter SDDC core components is a useful extension of VMware's existing technology, with enhancements that give customers more flexibility, scalability, resiliency, and security to meet their various needs.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Windows Server 2019 Insiders Preview

The Microsoft Windows Server 2019 Insiders Preview has been announced. In the past, Windows Server 2019 might have been named 2016 R2; it is also known as a Long-Term Servicing Channel (LTSC) release. Microsoft recommends LTSC Windows Server for workloads such as Microsoft SQL Server, SharePoint and SDDC. The focus of the Microsoft Windows Server 2019 Insiders Preview is around hybrid cloud, security, application development as well as deployment including containers, software defined data center (SDDC) and software defined data infrastructure, as well as converged along with hyper-converged infrastructure (HCI) management.

Windows Server 2019 Preview Features

Features and enhancements in the Microsoft Windows Server 2019 Insiders Preview span HCI management, security, hybrid cloud among others.

  • Hybrid cloud – Extending Active Directory, file server synchronization, cloud backup, applications spanning on-premises and cloud, and management.
  • Security – Protect, detect and respond, including shielded VMs, a guarded fabric of attested guarded hosts, Windows and Linux VM (shielded), VMConnect for Windows and Linux troubleshooting of shielded VMs and encrypted networks, and Windows Defender Advanced Threat Protection (ATP) among other enhancements.
  • Application platform – Developer and deployment tools for Windows Server containers and Windows Subsystem for Linux (WSL). Note that Microsoft has also been reducing the size of the Server image while extending feature functionality; the smaller images take up less storage space, plus load faster. As part of continued serverless and container support (Windows and Linux along with Docker), there are options for deployment orchestration including Kubernetes (in beta). Other enhancements extend the previous Windows Subsystem for Linux (WSL) support.

Other enhancements that are part of the Microsoft Windows Server 2019 Insiders Preview include cluster sets in support of software defined data center (SDDC). Cluster sets expand SDDC clusters into loosely coupled groupings of multiple failover clusters, including compute, storage, as well as hyper-converged configurations. Virtual machines have fluidity across member clusters within a cluster set and a unified storage namespace. The existing failover cluster management experience is preserved for member clusters, along with a new cluster set instance of the aggregate resources.

Management enhancements include S2D software defined storage performance history, Project Honolulu support for storage updates, along with PowerShell cmdlet updates, as well as System Center 2019. Learn more about Project Honolulu hybrid management here and here.

Microsoft and Windows LTSC and SAC

As a refresher, Microsoft Windows (along with other software) is now being released on two paths, including the more frequent Semi-Annual Channel (SAC), and less frequent LTSC releases. Some other things to keep in mind are that SAC releases are focused on server core and nano server as a container image, while LTSC includes server with desktop experience as well as server core. For example, Windows Server 2016 released fall of 2016 is an LTSC, while the 1709 release was a SAC which had specific enhancements for container related environments.

There was some confusion in fall of 2017 when 1709 was released, as it was optimized for container and serverless environments and thus lacked Storage Spaces Direct (S2D), leading some to speculate S2D was dead. S2D, among other items that were not in the 1709 SAC, is very much alive and enhanced in the LTSC preview for Windows Server 2019. Learn more about Microsoft LTSC and SAC here.

Test Driving Installing The Bits

One of the enhancements with the LTSC preview candidate Server 2019 is improved upgrades of existing environments. Granted, not everybody will choose an in-place upgrade keeping existing files; however, some may find the capability useful. I chose the upgrade keeping current files in place as an option to see how it worked. To do the upgrade I used a clean and up-to-date Windows Server 2016 Datacenter edition with desktop. This test system is a VMware ESXi 6.5 guest running on flash SSD storage. Before the upgrade to Windows Server 2019, I made a VMware vSphere snapshot so I could quickly and easily restore the system to a good state should something not work.

To get the bits, go to Windows Insiders Preview Downloads (you will need to register)

Windows Server 2019 LTSC build 17623 is available in 18 languages in ISO format and requires a key.

The keys for the pre-release unlimited activations are:
Datacenter Edition         6XBNX-4JQGW-QX6QG-74P76-72V67
Standard Edition             MFY9F-XBN2F-TYFMP-CCV49-RMYVH

The first step is downloading the bits from the Windows Insiders preview page, including selecting the language for the image to use.

Select the language for the image to download

Starting the download

Once you have the image downloaded, apply it to your bare metal server or hypervisor guest. In this example, I copied the Windows Server 2019 image to a VMware ESXi server for a Windows Server 2016 guest machine to access via its virtual CD/DVD.

Verify the Windows Server version before upgrade

After the download, access the image; in this case, I attached the image to the virtual machine CD, then accessed it and ran the setup application.

Download updates now or later

Entering the license key for the pre-release Windows Server 2019

Selecting Windows Server Datacenter with Desktop

Accepting the software license for the pre-release version

Next up is determining whether to do a new install (keep nothing), or an in-place upgrade. I wanted to see how smooth the in-place upgrade was, so I selected that option.

What to keep: nothing, or existing files and data

Confirming your selections

Ready to start the installation process

Installation underway of the Windows Server 2019 preview

Once the installation is complete, verify that Windows Server 2019 is now installed.

Completed upgrade from Windows Server 2016 to Microsoft Windows Server 2019 Insiders Preview

The above shows verifying the system build using PowerShell, as well as the message in the lower right corner of the display. Granted, the above does not show the new functionality; however, you should get an idea of how quickly a Windows Server 2019 preview can be deployed to explore and try out the new features.
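The walkthrough above uses PowerShell for the build check; as an alternative sketch, the same verification can be scripted in Python via the Windows registry. The registry path and value names are standard Windows locations; on the preview described above, the build should report 17623.

```python
import winreg  # Windows-only standard library module

# Read the OS name and build number from the registry.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion",
)
product, _ = winreg.QueryValueEx(key, "ProductName")
build, _ = winreg.QueryValueEx(key, "CurrentBuild")
winreg.CloseKey(key)

print(f"{product} build {build}")  # e.g., "... Server ... build 17623"
```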

Where to learn more

Learn more Microsoft Windows Server 2019 Insiders Preview, Windows Server Storage Spaces Direct (S2D), Azure and related software defined data center (SDDC), software defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

The Microsoft Windows Server 2019 Insiders Preview gives a glimpse of some of the new features that are part of the next evolution of Windows Server in support of hybrid IT environments. In addition to the new features and functionality that convey support not only for hybrid cloud, but also hybrid application development, deployment, DevOps and workloads, Microsoft is showing flexibility in management, ease of use, scalability, along with security as well as scale-out stability. If you have not looked at Windows Server for a while, or are involved with serverless, containers, or Kubernetes among other initiatives, now is a good time to check out the Microsoft Windows Server 2019 Insiders Preview.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

March 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 3 (March 2018)

Hello and welcome to the March 2018 Server StorageIO Data Infrastructure Update Newsletter.

If you are wondering where the January and February 2018 update newsletters are, they are rolled into this combined edition. In addition to the short email version (free signup here), you can access full versions (html here and PDF here) along with previous editions here.

In this issue:

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Data Infrastructure Data Protection and Backup BC BR DR HA Security

World Backup Day is coming up on March 31, which is a good time to verify and validate that your data protection is working as intended. On one hand, I think it is a good idea to call out the importance of making sure your data is protected, including backed up.

On the other hand, data protection is not a once-a-year event; rather, it is a year-round, 24 x 7 x 365 focus. Also, the focus needs to be on more than just backup: all aspects of data protection, from archiving to business continuance (BC), business resiliency (BR), disaster recovery (DR), always on, always accessible, along with security and recovery.

Data Infrastructure Data Protection Backup 4 3 2 1 rule
Data Infrastructure 4 3 2 1 Data Protection and Backup

Some data spring thoughts, perspectives and reminders. Data lakes may swell beyond their banks, causing rivers of data to flood as they flow into larger reservoirs, great data lakes, gulfs of data, seas and oceans of data. Granted, some of that data will be inactive and cold, parked like glaciers, while other data is semi-active, floating around like icebergs. Hopefully your data is stored on durable storage solutions or services and does not melt.

Data Infrastructure Server Storage I/O flash SSD NVMe
Various NAND Flash SSD devices and SAS, SATA, NVMe, M.2 interfaces

Non-Volatile Memory (NVM) includes various solid state device (SSD) mediums (e.g. NAND flash, 3D XPoint, MRAM among others) and packaging (drives, PCIe add-in cards [AiC], along with entire systems, appliances or arrays). Also part of the continued evolution of NVM, SSD and other persistent memories (PM), including storage class memories (SCM), are different access protocol interfaces.

Keep in mind that there is a difference between NVM (medium) and NVMe (access). NVM is the generic category of mediums or media and devices such as NAND flash, NVRAM, and 3D XPoint, among other SCMs (and PMs). In other words, NVM is what devices use for storing data; NVMe is how devices and systems are accessed. NVMe and its variations are how NVM, SSD, PM and SCM media and devices get accessed locally, as well as over network fabrics (e.g. NVMe-oF and FC-NVMe).
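
As a minimal sketch of that medium-vs-access distinction on a Windows system (using the built-in Storage module), the following lists local physical disks and shows which attach via NVMe vs other interfaces such as SAS or SATA:

    # List local physical disks with their access interface (BusType)
    Get-PhysicalDisk |
        Select-Object FriendlyName, BusType, MediaType,
            @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } } |
        Sort-Object BusType | Format-Table -AutoSize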

NVMe continues to evolve including with networked fabric variations such as RDMA based NVMe over Fabric (NVMe-oF), along with Fibre Channel based (FC-NVMe). The Fibre Channel Industry Association trade group recently held its second multi-vendor plugfest in support of NVMe over Fibre Channel.

Read more about NVM, NVMe, SSD, SCM, flash and related technologies, tools, trends, tips via the following resources:

Has object storage failed to live up to its industry hype, lacking traction? Or is object storage (also known as blobs) progressing with customer adoption and deployment on normal, realistic timelines? Recently I have seen some industry comments about object storage not catching on with customers or failing to live up to its hyped expectations. IMHO object storage is very much alive, along with block, file, table (e.g. database SQL and NoSQL repositories), and message/queue among others, as well as emerging blockchain aka data exchanges.

Various Industry and Customer Adoption Deployment timeline
Various Industry and Customer Adoption Deployment Timeline (Via: StorageIOblog.com)

An issue with object storage is that it is still relatively new and still evolving, and many IT environments and applications do not yet speak or access objects and blobs natively. Likewise, as is often the case, industry adoption and deployment tend to run early and short term around the hype, vs. the longer cycle of customer adoption and deployment. The downside for those who focus only on object storage (or blobs) is that they may be under pressure to deliver short term instead of adjusting to customer cycles, which take longer; however, real adoption and deployment also last longer.

While the hype and industry buzz around object storage (and blobs) may have faded, customer adoption continues and is here to stay, along with block and file among others; learn more at www.objectstoragecenter.com. Also keep in mind that there is a difference between industry adoption and customer adoption along with deployment.

Some recent Industry Activities, Trends, News and Announcements include:

In case you missed it, Amazon Web Services (e.g. AWS) announced EKS (Elastic Kubernetes Service) which, as its name implies, is an easy-to-use, managed Kubernetes (containers, serverless data infrastructure) service running on AWS. AWS joins others including Microsoft Azure Kubernetes Service (AKS), Google's Kubernetes Engine, EasyStack (ESContainer for OpenStack and Kubernetes), and VMware Pivotal Container Service (PKS), among others. What this means is that in the container and serverless data infrastructure ecosystem, Kubernetes container management (orchestration platform) is gaining in both industry as well as customer adoption along with deployment.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via BizTech: Why Hybrid (SSD and HDD) Storage Might Be Fit for SMB environments
Via Excelero: Server StorageIO white paper enabling database DBaaS productivity
Via Cloudian: YouTube video interview file services on object storage with HyperFile
Via CDW Solutions: Comments on Software Defined Access
Via SearchStorage: Comments on Cloudian HyperStore on demand cloud like pricing
Via EnterpriseStorageForum: Comments and tips on Software Defined Storage Best Practices
Via PRNewsWire: Comments on Excelero NVMe NVMesh Database and DBaaS solutions
Via SearchStorage: Comments on NooBaa multi-cloud storage management
Via CDW: Comments on New IT Strategies Improve Your Bottom Line 
Via EnterpriseStorageForum: Comments on Software Defined Storage: Pros and Cons
Via DataCenterKnowledge: Comments on The Great Data Center Headache IoT
Via SearchStorage: Comments on Dell and VMware merger scenario options
Via PRNewswire: Comments on Chelsio Microsoft Validation of iWARP/RDMA
Via SearchStorage: Comments on Server Storage Industry trends and Dell EMC
Via ChannelProSMB: Comments on Hybrid HDD and SSD storage solutions
Via ChannelProNetwork: Comments on What the Future Holds for HDDs
Via HealthcareITnews: Comments on MOUNTAINS OF MOBILE DATA
Via SearchStorage: Comments on Cloudian HyperStore 7 targets multi-cloud complexities
Via GlobeNewsWire: Comments on Cloudian HyperStore 7
Via GizModo: Comments on Intel Optane 800P NVMe M.2 SSD
Via DataCenterKnowledge: Comments on getting data centers ready for IoT
Via DataCenterKnowledge: Comments on Beyond the Hype: AI in the Data Center
Via DataCenterKnowledge: Comments on Data Center and Cloud Disaster Recovery
Via SearchStorage: Comments on Cloudian HyperFile marries NAS and object storage
Via SearchStorage: Comments on Top 10 Tips on Solid State Storage Adoption Strategy
Via SearchStorage: Comments on 8 Top Tips for Beating the Big Data Deluge

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

Application Data Value Characteristics Everything Is Not The Same
Application Data Availability 4 3 2 1 Data Protection
AWS Cloud Application Data Protection Webinar
Microsoft Windows Server 2019 Insiders Preview
Application Data Characteristics Types Everything Is Not The Same
Application Data Volume Velocity Variety Everything Is Not The Same
Application Data Access Lifecycle Patterns Everything Is Not The Same
Veeam GDPR preparedness experiences Webinar walking the talk
VMware continues cloud construction with March announcements
Benefits of Moving Hyper-V Disaster Recovery to the Cloud Webinar
World Backup Day 2018 Data Protection Readiness Reminder
Use Intel Optane NVMe U.2 SFF 8639 SSD drive in PCIe slot
Data Infrastructure Resource Links cloud data protection tradecraft trends
How to Achieve Flexible Data Protection Availability with All Flash Storage Solutions
November 2017 Server StorageIO Data Infrastructure Update Newsletter
IT transformation Serverless Life Beyond DevOps Podcast
Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips
HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads
AWS Announces New S3 Cloud Storage Security Encryption Features
Introducing Windows Subsystem for Linux WSL Overview #blogtober
Hot Popular New Trending Data Infrastructure Vendors To Watch

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The list spans various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is also a good resource to check out. Speaking of my books, Didier Van Hoye (@WorkingHardInIt) has a good review over on his site you can view here; also check out the rest of his great content while there.

In case you may have missed it, here is a good presentation from AWS re:invent 2017 by Brendan Gregg (@brendangregg) about how Netflix does EC2 and other AWS tuning along with plenty of great resource links. Keith Tenzer (@keithtenzer) provides a good perspective piece about containers in a large IT enterprise environment here including various options.

Speaking of IT data centers and data infrastructure environments, check out the list of some of the world's most extreme habitats for technology here. Mark Betz (@markbetz) has a series of Docker and Kubernetes networking fundamentals posts on his site here, as well as over at Medium, including mention of Google Cloud (@googlecloud). The posts in Mark's series are good refreshers or intros to how Docker and Kubernetes handle basic networking between containers, pods, nodes, and hosts in clusters. Check out part I here and part II here.

Blockchain elements
Image via https://stevetodd.typepad.com

Steve Todd (@Stevetodd) has some good perspectives about Trusted Data Exchanges (e.g. life beyond blockchain and bitcoin) here, core element considerations (beyond the product pitch) here, and associated data infrastructure and storage evolution vs. revolution perspectives here.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

March 27, 2018 – Webinar – Veeams Road to GDPR Compliancy The 5 Lessons Learned

Feb 28, 2018 – Webinar – Benefits of Moving Hyper-V Disaster Recovery to the Cloud

Jan 30, 2018 – Webinar – Achieve Flexible Data Protection and Availability with All Flash Storage

Nov. 9, 2017 – Webinar – All You Need To Know about ROBO Data Protection Backup

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Connect and Converse With Us


Subscribe to Newsletter – Newsletter Archives – StorageIO.com – StorageIOblog.com

What this all means and wrap-up

Data infrastructures are what exist inside physical data centers, spanning cloud, converged, hyper-converged, virtual, serverless and other software defined as well as legacy environments. The fundamental role of data infrastructures, comprising server (compute), storage, and I/O networking hardware, software and services, defined by management tools, best practices and policies, is to provide a platform for applications along with their data to deliver information services. With March 31 being World Backup Day, also focus on making sure that on April 1st you are not a fool trying to recover from a bad data protection copy. With the continued movement to flash SSD along with other forms of storage class memory (SCM) and persistent memory (PM), data moves at a faster rate, meaning data protection is even more important for getting you out of trouble as fast as you get into it.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Value Characteristics Everything Is Not The Same (Part I)


This is part one of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we start things off by looking at general application server storage I/O characteristics that have an impact on data value as well as access.

Application Data Value Software Defined Data Infrastructure Essentials Book SDDC

Everything is not the same across different organizations, including Information Technology (IT) data centers and data infrastructures, along with the applications as well as the data they support. For example, there is so-called big data, which can be many small files, objects, blobs, or data and bit streams representing telemetry, click-stream analytics, and logs, among other information.

Keep in mind that applications impact how data is accessed, used, processed, moved and stored. What this means is that a focus on data value, access patterns, and other related topics also needs to consider application Performance, Availability, Capacity, and Economic (PACE) attributes.

If everything is not the same, why is so much data along with many applications treated the same from a PACE perspective?

Data infrastructure resources, including servers, storage and networks, might be cheap or inexpensive; however, there is a cost to managing them along with the data.

Managing includes data protection (backup, restore, BC, DR, HA, security) along with other activities. Likewise, there is a cost to the software along with cloud services among others. By understanding how applications use and interact with data, smarter, more informed data management decisions can be made.

IT Applications and Data Infrastructure Layers
IT Applications and Data Infrastructure Layers

Keep in mind that everything is not the same across various organizations, data centers, data infrastructures, data and the applications that use them. Also keep in mind that programs (e.g. applications) = algorithms (code) + data structures (how data is defined and organized, structured or unstructured).

There are traditional applications, along with those tied to Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning (ML), Big Data and other analytics including real-time click stream, media and entertainment, security and surveillance, log and telemetry processing among many others.

What this means is that there are many different applications with various characteristics and attributes, along with resource (server compute, I/O network and memory, storage) and service requirements.

Common Applications Characteristics

Different applications will have various attributes, in general, as well as in how they are used; for example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. Performance, Availability, Capacity, and Economics (PACE) describe the applications' and data's characteristics and needs, shown in the following figure.

Application and data PACE attributes
Application PACE attributes (via Software Defined Data Infrastructure Essentials)

All applications have PACE attributes, however:

  • PACE attributes vary by application and usage
  • Some applications and their data are more active than others
  • PACE characteristics may vary within different parts of an application

Think of an application along with its associated data PACE attributes as its personality: how it behaves, what it does, how it does it, and when, along with value, benefit, or cost as well as quality-of-service (QoS) attributes.

Understanding applications in different environments, including data value and associated PACE attributes, is essential for making informed server, storage, I/O and data infrastructure decisions. These decisions range from configuration to acquisitions or upgrades; when, where, why, and how to protect; and how to optimize performance, including capacity planning, reporting, and troubleshooting, not to mention addressing budget concerns.

Primary PACE attributes for active and inactive applications and data are:

P – Performance and activity (how things get used)
A – Availability and durability (resiliency and data protection)
C – Capacity and space (what things use or occupy)
E – Economics and Energy (people, budgets, and other barriers)

Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network, or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, business continuity, disaster recovery) that determine the tools, technologies, and techniques to use.

Budgets are also nearly always a concern, which for some applications means enabling more performance per cost while others are focused on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention, and disposition, among others.
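
As a minimal sketch of how such policies might be expressed (all names, values, and the tiering rule below are illustrative assumptions, not from any particular product), an application's PACE profile can be captured as a simple record that policy tooling consumes:

    # Hypothetical PACE profile for one application (illustrative values)
    $paceProfile = @{
        Application  = 'OrderEntryDB'
        Performance  = @{ TargetIOPS = 15000; MaxLatencyMs = 5 }
        Availability = @{ RPOMinutes = 15; RTOMinutes = 60; ProtectionCopies = 4 }
        Capacity     = @{ AllocatedGB = 2048; GrowthPctPerYear = 30 }
        Economics    = @{ MonthlyBudgetUSD = 900 }
    }

    # Example of a policy decision driven by the profile
    if ($paceProfile.Performance.MaxLatencyMs -le 5) {
        "Place $($paceProfile.Application) on a low-latency (e.g. NVMe SSD) tier"
    }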

Performance and Activity (How Resources Get Used)

Some applications or components that comprise a larger solution will have more performance demands than others. Likewise, the performance characteristics of applications along with their associated data will also vary. Performance applies to the server, storage, and I/O networking hardware along with associated software and applications.

For servers, performance is focused on how much CPU or processor time is used, along with memory and I/O operations. I/O operations to create, read, update, or delete (CRUD) data include activity rate (frequency or data velocity) of I/O operations (IOPS). Other considerations include the volume or amount of data being moved (bandwidth, throughput, transfer), response time or latency, along with queue depths.

Activity is the amount of work to do or being done in a given amount of time (seconds, minutes, hours, days, weeks), which can be transactions, rates, or IOPS. Additional performance considerations include latency, bandwidth, throughput, response time, queues, reads or writes, gets or puts, updates, lists, directories, searches, page views, files opened, videos viewed, or downloads.
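
As a hedged illustration, several of these metrics can be sampled on a Windows server using built-in performance counters (the counter paths are standard Windows counters; the five-second interval and sample count are arbitrary choices):

    # Sample CPU, memory, and disk I/O metrics three times, 5 seconds apart
    Get-Counter -Counter @(
        '\Processor(_Total)\% Processor Time',
        '\Memory\Pages/sec',
        '\PhysicalDisk(_Total)\Disk Transfers/sec',      # activity rate (IOPS)
        '\PhysicalDisk(_Total)\Disk Bytes/sec',          # bandwidth
        '\PhysicalDisk(_Total)\Avg. Disk sec/Transfer',  # latency
        '\PhysicalDisk(_Total)\Current Disk Queue Length'
    ) -SampleInterval 5 -MaxSamples 3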
 
Server, storage, and I/O network performance include:

  • Processor CPU usage time and queues (user and system overhead)
  • Memory usage effectiveness including page and swap
  • I/O activity including between servers and storage
  • Errors, retransmission, retries, and rebuilds

The following figure shows a generic performance example of data being accessed (mixed reads, writes, random, sequential, big, small, low and high-latency) on a local and a remote basis. The example shows how, for a given time interval (see lower right), applications access and work with data via different data streams (larger image, left center). Also shown are queues and I/O handling along with end-to-end (E2E) response time.

fundamental server storage I/O
Server I/O performance fundamentals (via Software Defined Data Infrastructure Essentials)

Click here to view a larger version of the above figure.

Also shown on the left in the above figure is an example of E2E response time from the application through the various data infrastructure layers, as well as, lower center, the response time from the server to the memory or storage devices.

Various queues are shown in the middle of the above figure; these are indicators of how much work is occurring and whether the processing is keeping up with the work or causing backlogs. Context is needed for queues, as they exist in the server, I/O networking devices, and software drivers, as well as in storage among other locations.

Some basic server, storage, I/O metrics that matter include:

  • Queue depth of I/Os waiting to be processed and concurrency
  • CPU and memory usage to process I/Os
  • I/O size, or how much data can be moved in a given operation
  • I/O activity rate or IOPS = (amount of data moved / I/O size) per unit of time
  • Bandwidth = data moved per unit of time = I/O size × I/O rate (see the worked example after this list)
  • Latency usually increases with larger I/O sizes, decreases with smaller requests
  • I/O rates usually increase with smaller I/O sizes and vice versa
  • Bandwidth increases with larger I/O sizes and vice versa
  • Sequential stream access data may have better performance than some random access data
  • Not all data is conducive to being sequential stream, or random
  • Lower response time is better, higher activity rates and bandwidth are better
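
The following is a worked example of the I/O rate and bandwidth relationship above; the I/O size and rate are arbitrary, assumed values:

    # Bandwidth = I/O size x I/O rate (illustrative values)
    $ioSizeKB = 8         # assumed average I/O size in KB
    $iops     = 20000     # assumed I/O rate (I/Os per second)
    $bandwidthMBs = ($ioSizeKB * $iops) / 1024
    # 8 KB x 20,000 IOPS = 160,000 KB/sec, or about 156.3 MB/sec
    'Bandwidth: {0:N1} MB/sec' -f $bandwidthMBs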

Queues with high latency and small I/O sizes or low I/O rates could indicate a performance bottleneck. Queues with low latency, high I/O rates and good bandwidth (data being moved) could be a good thing. An important note is to look at several metrics, not just IOPS, activity, bandwidth, queues, or response time alone. Also, keep in mind that the metrics that matter for your environment may be different from those for somebody else.

Something to keep in perspective is that there can be a large amount of data with low performance, or a small amount of data with high performance, not to mention many other variations. The important concept is that as space capacity scales, performance does not necessarily improve (or vice versa); after all, everything is not the same.

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Keep in mind that application data value characteristics differ: everything is not the same across various organizations, data centers, and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. However, all applications have some element (high or low) of Performance, Availability, Capacity, and Economics (PACE), along with various similarities. Likewise, data has different value at various times. Continue reading the next post (Part II Application Data Availability Everything Is Not The Same) in this five-part mini-series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.