Dell Technology World 2018 Announcement Summary
This is part one of a five-part series summarizing the Dell Technology World 2018 announcements. Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (aka DTW 2018) as a guest of Dell (that is a disclosure, btw). There were several announcements along with plenty of other activity from sessions, meetings, hallway and event networking taking place at DTW 2018.

Major data infrastructure technology announcements include:

  • PowerMax all-flash array (AFA) solid state device (SSD) NVMe storage system
  • PowerEdge four-socket 2U and 4U rack servers
  • XtremIO X2 AFA SSD storage system updates
  • PowerEdge MX preview of future composable servers
  • Desktop and thin client along with other VDI updates
  • Cloud and networking enhancements

Besides the above, additional data infrastructure related announcements were made in association with Dell Technologies family members including VMware along with other partners, as well as customer awards. Other updates and announcements were tied to business updates from Dell Technologies, Dell Technologies Capital (venture capital), and Dell Financial Services.

Dell Technology World Buzzword Bingo Lineup

Some of the buzzword bingo terms, topics, acronyms from Dell Technology World 2018 included AFA, AI, Autonomous, Azure, Bare Metal, Big Data, Blockchain, CI, Cloud, Composable, Compression, Containers, Core, Data Analytics, Dedupe, Dell, DFS (Dell Financial Services), DFR (Data Footprint Reduction), Distributed Ledger, DL, Durability, Fabric, FPGA, GDPR, Gen-Z, GPU, HCI, HDD, HPC, Hybrid, IOP, Kubernetes, Latency, MaaS (Metal as a Service), ML, NFV, NSX, NVMe, NVMeoF, PACE (Performance Availability Capacity Economics), PCIe, Pivotal, PMEM, RAID, RPO, RTO, SAS, SATA, SC, SCM, SDDC, SDS, Socket, SSD, Stamp, TBW (Terabytes Written per day), VDI, venture capital, VMware and VR among others.

Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue

Dell Technology World 2018 took place at the combined Palazzo and Venetian hotels along with the adjacent Sands Expo Center, kicking off Monday, April 30th and wrapping up Thursday, May 3rd.

The theme for Dell Technology World DTW 2018 was make it real, which in some ways was interesting given the focus on the virtual, including virtual reality (VR), software-defined data center (SDDC) virtualization and other data infrastructure topics, along with artificial intelligence (AI).

Virtual Sky Dell Technology World 2018
Make it real – Venetian Palazzo St. Mark’s Square on the way to Sands Expo Center

There was plenty of AI, VR, SDDC along with other technologies, tools as well as some fun stuff to do including VR games.

Dell Technology World 2018 Commons Area
Dell Technology World Village Area near Key Note and Expo Halls

Dell Technology World 2018 Commons Area Drones
Dell Technology World Drone Flying Area

During a break from some meetings, I took a few minutes to fly a drone using VR, which was interesting. I have been operating drones (see some videos here) for several years, flying heads-up by hand without depending on first-person view (FPV) or extensive autonomous operations. Needless to say, the VR was interesting, granted I encountered a bit of vertigo that I had to get used to.

Dell Technology World 2018 Commons Area Virtual Village
More views of the Dell Technology World Village and Commons Area with VR activity

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Village and VR area

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Bean Bag Area

Dell Technology World 2018 Announcement Summary

Ok, nuff with the AI, ML, DL, VR fun, time to move on to the business and technology topics of Dell Technologies World 2018.

What was announced at Dell Technology World 2018 included among others:

Dell Technology World 2018 PowerMax
Dell PowerMax Front View

Subsequent posts in this series take a deeper look at the various announcements as well as what they mean.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

On the surface it may appear that there was not much announced at Dell Technology World 2018, particularly compared to some of the recent Dell EMC Worlds and EMC Worlds. However, it turns out that there was a lot announced, granted without some of the entertainment and circus-like atmosphere of previous events. Continue reading Part II Dell Technology World 2018 Modern Data Center Announcement Details here in this series, along with Part III here, Part IV here (including PowerEdge MX composable infrastructure leveraging Gen-Z) and Part V (servers and converged) here.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part II Dell Technology World 2018 Modern Data Center Announcement Details

Dell Technology World 2018 Modern Data Center Announcement Summary
This is Part II, Dell Technology World 2018 Modern Data Center Announcement Details, part of a five-post series (view part I here, part III here, part IV here and part V here). Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (aka DTW 2018) as a guest of Dell (that is a disclosure, btw).

Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue

What was announced at Dell Technology World 2018 included among others:

Dell Technology World 2018 PowerMax
Dell PowerMax Front View

Dell Technology World 2018 Modern Data Center Announcement Details

Dell Technologies data infrastructure related announcements included new solution competencies and expanded services deployment competencies with partners to boost deal size and revenues. An Internet of Things (IoT) solution competency was added, with others planned including High-Performance Computing (HPC) / Super Computing (SC), Data Analytics, Business Applications and Security related topics. Dell Financial Services flexible consumption models, announced at Dell EMC World 2017, provide flexible financing options for both partners and their clients.

Flexible Dell Financial Services cloud-like consumption model (e.g., pay for what you use) enhancements include reduced entry points for the Flex on Demand solutions across the Dell EMC storage portfolio. For example, there are Flex on Demand velocity pricing models for the Dell EMC Unity All-Flash Array (AFA) solid state device (SSD) storage solution and for XtremIO X2 AFA systems, with price points of less than USD 1,000.00 per month. The benefit is that Dell partners have a financial vehicle to help their midrange customers use consumption-based financing for all-flash storage without custom configurations, resulting in faster deployment opportunities.

In other partner updates, Dell Technologies is enhancing its Dell EMC MyRewards incentive program to help drive new business. Dell EMC MyRewards is an opt-in, points-based reward program for solution provider sales reps and systems engineers. MyRewards is slated to replace the existing Partner Advantage and Sell & Earn programs with bigger and better promotions (up to 3x bonus payout, simplified global claiming).

What this means for partners is the ability to earn more while offering their clients new solutions with flexible financing and consumption-based pricing, among other options. Other partner enhancements include an updated demo program, a Proof of Concept (POC) program, and IT transformation campaigns.

Powering up the Modern Data Center and Future of Work

Powering up the modern data center along with the future of work, part of the make it real theme of Dell Technologies World 2018, includes data infrastructure server, storage and I/O networking hardware, software and service solutions. These data infrastructure solutions include NVMe-based storage, Converged Infrastructure (CI), hyper-converged infrastructure (HCI), software-defined data center (SDDC), VMware-based multi-clouds, along with modular infrastructure resources.

In addition to server and storage data infrastructure resources from desktop to data center, Dell also has a focus on enabling traditional as well as emerging Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL) and analytics applications. Besides providing data infrastructure resources to support AI, ML, DL, IoT and other applications along with their workloads, Dell is leveraging AI technology in some of its products, for example PowerMax.

Other Dell Technologies announcements include Virtustream cloud risk management and compliance, along with Epic and SAP Digital Health healthcare software solutions. In addition to Virtustream, Dell Technologies cloud-related announcements also include the VMware NSX-based Virtual Cloud Network with Microsoft Azure support along with security enhancements. Refer here to recent April VMware vSphere, vCenter, vSAN, vRealize and other virtual announcements, as well as here for March VMware cloud updates.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

The above set of announcements spans business to technology along with partner activity. Continue reading here (Part III Dell Technology World 2018 Storage Announcement Details) of this series, and part I (general summary) here, along with Part IV (PowerEdge MX Composable) here and part V here.

Ok, nuff said, for now.

Cheers Gs


Part III Dell Technology World 2018 Storage Announcement Details

This is Part III, Dell Technology World 2018 Storage Announcement Details, part of a five-post series (view part I here, part II here, part IV (PowerEdge MX Composable) here and part V here). Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (aka DTW 2018) as a guest of Dell (that is a disclosure, btw).

Dell Technology World 2018 Storage Announcements Include:

  • PowerMax – Enterprise class tier 0 and tier 1 all-flash array (AFA)
  • XtremIO X2 – Native replication and new entry-level pricing

Dell Technology World 2018 PowerMax back view
Back view of Dell PowerMax

Dell PowerMax Something Old, Something New, Something Fast Near You Soon

PowerMax is the new companion to VMAX. Positioned for traditional tier 0 and tier 1 enterprise-class applications and workloads, PowerMax is optimized for dense server virtualization and SDDC, SAP, Oracle, SQL Server along with other low-latency, high-performance database activity. Other target workloads include Mainframe as well as Open Systems, AI, ML, DL, Big Data, and consolidation.

The Dell PowerMax is an all-flash array (AFA) with an end-to-end NVMe architecture along with built-in AI and ML technology. It builds on the architecture of Dell EMC VMAX (some models are still available) with new, faster processors and is fully end-to-end NVMe ready (e.g., front-end server attachment and back-end devices).

The AI and ML features of the PowerMax operating environment (PowerMaxOS) include a software engine that learns and makes autonomous storage management decisions, including tiering. Other AI and ML enabled operations include performance optimizations based on I/O pattern recognition.

Other PowerMax features besides increased speeds, feeds and performance include data footprint reduction (DFR) with inline deduplication along with enhanced compression. The DFR benefits include up to 5:1 data reduction for space efficiency without performance impact, boosting performance effectiveness. The DFR, along with 2x improved rack density and up to 40% power savings (your results may vary, per Dell claims), enables an impressive amount of performance, availability, capacity and economics (e.g., PACE) in a given number of cubic feet (or meters).
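To put the space efficiency claim into perspective, below is a minimal back-of-the-envelope sketch (Python) showing how a DFR ratio translates into effective capacity; the 100 TB usable figure is an assumed example value, not a Dell specification.

# Illustrative only: effective (logical) capacity from usable capacity and a
# data footprint reduction (DFR) ratio such as the up to 5:1 claimed above.
def effective_capacity_tb(usable_tb, dfr_ratio):
    return usable_tb * dfr_ratio

usable_tb = 100.0   # assumed usable flash capacity in TB (example value)
dfr_ratio = 5.0     # up to 5:1 reduction (your results may vary)

print(f"{usable_tb:.0f} TB usable at {dfr_ratio:.0f}:1 DFR ~= "
      f"{effective_capacity_tb(usable_tb, dfr_ratio):.0f} TB effective")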

There are two PowerMax models: the 2000 (scales from 1 to 2 redundant controllers) and the 8000 (scales from 1 to 8 redundant controller nodes). Note that controller nodes use Intel Xeon multi-socket, multi-core processors, enabling scale-up and scale-out performance, availability, and capacity. Competitors of the PowerMax include AFA solutions from HPE (3PAR), NetApp, and Pure Storage among others.

Dell Technology World 2018 PowerMax Front View
Front view of Dell PowerMax

Besides resiliency, data services and data protection, Dell claims PowerMax is 2x faster than its nearest high-end storage system competitors, with up to 150 GB/sec (e.g., 1,200 Gbps) of bandwidth, as well as up to 10 million IOPS with 50% lower latency compared to the previous VMAX.

PowerMax is also fully end-to-end NVMe ready (both back-end and front-end). The back-end includes NVMe drives, devices, shelves, and enclosures, while the front-end is ready for future NVMe over Fabrics (e.g., NVMeoF). Being NVMeoF ready enables PowerMax to add future front-end server network connectivity options alongside traditional SAN Fibre Channel (FC) and iSCSI among others.

PowerMax is also ready for new, emerging high-speed, low-latency storage class memory (SCM). SCM is the next generation of persistent memory (PMEM), with performance closer to traditional DRAM while providing the persistence of flash SSD. Examples of SCM technologies entering the market include Intel Optane based on 3D XPoint, along with others such as those from Everspin.

IBM Z Zed Mainframe at Dell Technology World 2018
An IBM “Zed” Mainframe (in case you have never seen one)

Based on the performance claims, the Dell PowerMax has an interesting, if not potentially industry-leading, power, performance, availability, capacity and economic footprint per cubic foot (or meter). It will be interesting to see some third-party validation or audits of Dell's claims. Likewise, I look forward to seeing some real-world applied workloads of Dell PowerMax vs. other storage systems. Here are some additional perspectives via SearchStorage: Dell EMC all-flash PowerMax replaces VMAX, injects NVMe.


Dell PowerMax Visual Studio (Image via Dell.com)

To help with customer decision making, Dell has created an interactive VMAX and PowerMax configuration studio that you can use to try out as well as learn about different options here. View more Dell PowerMax speeds, feeds, slots, watts, features and functions here (PDF).

Dell Technology World 2018 XtremIO X2

XtremIO X2

The Dell XtremIO X2 and its XIOS 6.1 operating system (software-defined storage) have been enhanced with native replication across wide area networks (WAN). The new WAN replication is metadata-aware and native to the XtremIO X2, implementing data footprint reduction (DFR) technology that reduces the amount of data sent over network connections. The benefit is more data moved in a given amount of time, along with better data protection requiring less time (and network) by only moving unique changed data.

Dell Technology World 2018 XtremIO X2 back view
Back View of XtremIO X2

Dell EMC claims to reduce WAN network bandwidth by up to 75% using the new native XtremIO X2 asynchronous replication. Also, Dell says XtremIO X2 requires up to 38% less storage space at disaster recovery and business resiliency locations while maintaining predictable recovery point objectives (RPO) of 30 seconds. Another XtremIO X2 announcement is a new entry model for customers at up to 55% lower cost than previous product generations. View more information about Dell XtremIO X2 here, along with speeds and feeds here, here, as well as here.

What about Dell Midrange Storage Unity and SC?

Here are some perspectives Via SearchStorage: Dell EMC midrange storage keeps its overlapping arrays.

Dell Bulk and Elastic Cloud Storage (ECS)

One of the questions I had going into Dell Technology World 2018 was the status of ECS (and its predecessors Atmos as well as Centera) bulk object storage, given the lack of messaging and news around it. Specifically, my concern was that if ECS is the platform for storing and managing data to be preserved for the future, what is the current status, state, as well as future of ECS?

In conversations with the Dell ECS folks, I learned that ECS, which has encompassed Centera functionality, is very much alive; stay tuned for more updates. Also, note that Centera has reached end of life (EOL); however, its feature functionality has been absorbed by ECS, meaning data preserved on Centera can now be managed by ECS. While I cannot divulge the details of some meeting discussions, I can say that I am comfortable (for now) with the future direction of ECS along with the data it manages; stay tuned for updates.

Dell Data Protection

What about data protection? Security was mentioned in several different contexts during Dell Technology World 2018, and a strong physical security presence was seen at the Palazzo and Sands venues. Likewise, there was a data protection presence at Dell Technologies World 2018 in the expo hall, as well as in various sessions.

What was heard was mainly around data protection management tools, hybrid cloud, as well as data protection appliances and Data Domain-based solutions. Perhaps we will hear more from Dell Technologies World in the future about data protection related topics.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

If there was any doubt about whether Dell would keep EMC storage progressing forward, the above announcements show some examples of what they are doing. On the other hand, let's stay tuned to see what news and updates appear in the future pertaining to midrange storage (e.g., Unity and SC) as well as Isilon, ScaleIO, and data protection platforms as well as software, among other technologies.

Continue reading part IV (PowerEdge MX Composable and Gen-Z) here in this series, as well as part I here, part II here, and part V here.

Ok, nuff said, for now.

Cheers Gs


Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure
This is Part IV, Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure, part of a five-post series (view part I here, part II here, part III here and part V here). Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (aka DTW 2018) as a guest of Dell (that is a disclosure, btw).

Introducing PowerEdge MX Composable Infrastructure (the other CI)

Dell announced at Dell Technology World 2018 a preview of the new PowerEdge MX (Kinetic) family of data infrastructure resource servers. PowerEdge MX is being developed to meet the needs of resource-centric data infrastructures that require scalability, as well as performance, availability, capacity and economic (PACE) flexibility for diverse workloads. Read more about Dell PowerEdge MX, Gen-Z and composable infrastructures (the other CI) here.

Some of the workloads being targeted by PowerEdge MX include large-scale dense SDDC virtualization (and containers), along with private clouds (or public clouds run by service providers). Other workloads include AI, ML, DL, data analytics, HPC, SC, big data, in-memory databases, software-defined storage (SDS), software-defined networking (SDN), and network function virtualization (NFV) among others.

The previewed PowerEdge MX, to be formally announced later in 2018, features a flexible, decomposable as well as composable architecture that enables resources to be disaggregated and then reassigned or aggregated to meet particular needs (e.g., defined or composed). Instead of traditional software-defined virtualization carving servers up into smaller virtual machines or containers to meet workload needs, PowerEdge MX is part of a next-generation approach that enables server resources to be leveraged at a finer granularity.

For example, today an entire server, including all of its sockets, cores, memory and PCIe devices among other resources, gets allocated and defined for use. A server gets defined for use by an operating system when running bare metal (or Metal as a Service), or by a hypervisor. PowerEdge MX (and other platforms expected to enter the market) provide finer granularity, where with the proper upper-layer (or higher-altitude) software, resources can be allocated and defined to meet different needs.

What this means is the potential to allocate resources to a given server with more granularity and flexibility, as well as to combine multiple servers' resources to create what appears to be a larger server. There are vendors in the market who have been working on and enabling this type of approach for several years, ranging from ScaleMP to startups Liqid and Tidal among others. However, at the heart of the Dell PowerEdge MX is the new, emerging Gen-Z technology.

If you are not familiar with Gen-Z, add it to your buzzword bingo lineup and learn about it, as it is coming your way. A brief overview of the Gen-Z consortium along with Gen-Z material and primer information is here. A common question is whether Gen-Z is a replacement for PCIe; for now, they will coexist and complement each other. Another common question is whether Gen-Z will replace Ethernet and InfiniBand; again, for now they complement each other. Yet another question is whether Gen-Z will replace Intel QuickPath and other CPU, device and memory interconnects; the answer is potentially, and in my opinion, watch to see how long Intel drags its feet.

Note that composability is another way of saying defined without saying defined, something to pay attention to as well as have some vendor fun with. Also, note that Dell refers to PowerEdge MX and its Kinetic architecture, which is not the same as the Seagate Kinetic Ethernet-based object key-value accessed drive initiative from a few years ago (learn more about Seagate Kinetic here). Learn more about Gen-Z and what Dell is doing here.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Dell has provided a glimpse of what they are working on pertaining to composable infrastructure (the other CI), as well as Gen-Z and the related next generation of servers with PowerEdge MX and Kinetic. Stay tuned for more about Gen-Z and composable infrastructures. Continue reading Part V (servers and converged) in this series here, as well as part I here, part II here and part III here.

Ok, nuff said, for now.

Cheers Gs


Microsoft Windows Server 2019 Insiders Preview

Microsoft Windows Server 2019 Insiders Preview has been announced. In the past, Windows Server 2019 might have been named 2016 R2; it is also known as a Long-Term Servicing Channel (LTSC) release. Microsoft recommends LTSC Windows Server for workloads such as Microsoft SQL Server, SharePoint and SDDC. The focus of the Microsoft Windows Server 2019 Insiders Preview is around hybrid cloud, security, application development as well as deployment including containers, software defined data center (SDDC) and software defined data infrastructure, as well as converged along with hyper-converged infrastructure (HCI) management.

Windows Server 2019 Preview Features

Features and enhancements in the Microsoft Windows Server 2019 Insiders Preview span HCI management, security, hybrid cloud among others.

  • Hybrid cloud – Extending Active Directory, file server synchronization, cloud backup, applications spanning on-premises and cloud, and management.
  • Security – Protect, detect and respond capabilities including shielded VMs (Windows and Linux), an attested guarded fabric of guarded host machines, VMConnect for troubleshooting of shielded Windows and Linux VMs, encrypted networks, and Windows Defender Advanced Threat Protection (ATP), among other enhancements.
  • Application platform – Developer and deployment tools for Windows Server containers and Windows Subsystem for Linux (WSL). Note that Microsoft has also been reducing the size of the server image while extending feature functionality. The smaller images take up less storage space and load faster. As part of continued serverless and container support (Windows and Linux along with Docker), there are options for deployment orchestration including Kubernetes (in beta). Other enhancements include extending previous support for Windows Subsystem for Linux (WSL).

Other enhancements in the Microsoft Windows Server 2019 Insiders Preview include cluster sets in support of the software defined data center (SDDC). Cluster sets expand SDDC clusters into loosely coupled groupings of multiple failover clusters, including compute, storage as well as hyper-converged configurations. Virtual machines have fluidity across member clusters within a cluster set, along with a unified storage namespace. The existing failover cluster management experience is preserved for member clusters, along with a new cluster set instance of the aggregate resources.

Management enhancements include S2D software defined storage performance history, Project Honolulu support for storage updates, PowerShell cmdlet updates, as well as System Center 2019. Learn more about Project Honolulu hybrid management here and here.

Microsoft and Windows LTSC and SAC

As a refresher, Microsoft Windows (along with other software) is now being released on two paths: the more frequent Semi-Annual Channel (SAC) and the less frequent LTSC releases. Something else to keep in mind is that SAC releases are focused on Server Core and Nano Server as a container image, while LTSC includes Server with Desktop Experience as well as Server Core. For example, Windows Server 2016, released in the fall of 2016, is an LTSC release, while the 1709 release was a SAC release with specific enhancements for container-related environments.

There was some confusion in the fall of 2017 when 1709 was released, as it was optimized for container and serverless environments and thus lacked Storage Spaces Direct (S2D), leading some to speculate S2D was dead. S2D, among other items that were not in the 1709 SAC release, is very much alive and enhanced in the LTSC preview for Windows Server 2019. Learn more about Microsoft LTSC and SAC here.

Test Driving Installing The Bits

One of the enhancements with the LTSC preview of Windows Server 2019 is improved in-place upgrades of existing environments. Granted, not everybody will choose an in-place upgrade keeping existing files; however, some may find the capability useful. I chose to try the upgrade keeping current files in place to see how it worked. To do the upgrade I used a clean, up-to-date Windows Server 2016 Datacenter edition with Desktop Experience. This test system is a VMware ESXi 6.5 guest running on flash SSD storage. Before the upgrade to Windows Server 2019, I made a VMware vSphere snapshot so I could quickly and easily restore the system to a good state should something not work.

To get the bits, go to Windows Insiders Preview Downloads (you will need to register)

Windows Server 2019 LTSC build 17623 is available in 18 languages in ISO format and requires a key.

The keys for the pre-release unlimited activations are:
Datacenter Edition         6XBNX-4JQGW-QX6QG-74P76-72V67
Standard Edition             MFY9F-XBN2F-TYFMP-CCV49-RMYVH

The first step is downloading the bits from the Windows Insiders Preview page, including selecting the language for the image to use.

Getting the windows server 2019 preview bits
Select the language for the image to download

windows server 2019 select language

Starting the download

Once you have the image downloaded, apply it to your bare metal server or hypervisor guest. In this example, I copied the Windows Server 2019 image to a VMware ESXi server for a Windows Server 2016 guest machine to access via its virtual CD/DVD.

pre upgrade check windows server version
Verify the Windows Server version before upgrade

After the download, access the image; in this case, I attached the image to the virtual machine CD, then accessed it and ran the setup application.

Microsoft Windows Server 2019 Insiders Preview download

Download updates now or later

license key

Entering license key for pre-release windows server 2019

Microsoft Windows Server 2019 Insiders Preview datacenter desktop version

Selecting Windows Server Datacenter with Desktop

Microsoft Windows Server 2019 Insiders Preview license

Accepting Software License for pre-release version.

Next up is determining whether to do a new install (keep nothing) or an in-place upgrade. I wanted to see how smooth the in-place upgrade was, so I selected that option.

Microsoft Windows Server 2019 Insiders Preview inplace upgrade

What to keep, nothing, or existing files and data


Confirming your selections

Microsoft Windows Server 2019 Insiders Preview install start

Ready to start the installation process

Microsoft Windows Server 2019 Insiders Preview upgrade in progress
Installation underway of Windows Server 2019 preview

Once the installation is complete, verify that Windows Server 2019 is now installed.

Microsoft Windows Server 2019 Insiders Preview upgrade completed
Completed upgrade from Windows Server 2016 to Microsoft Windows Server 2019 Insiders Preview

The above shows verifying the system build using PowerShell, as well as the message in the lower right corner of the display. Granted, the above does not show the new functionality; however, you should get an idea of how quickly a Windows Server 2019 preview can be deployed to explore and try out the new features.
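The screenshot above uses PowerShell; as an alternative quick check, here is a minimal sketch (Python, standard library only, run on the upgraded Windows system) that reads the product name and build number from the standard Windows registry location. The exact values returned will vary by release; build 17623 is the preview build referenced earlier.

# Minimal sketch: read the Windows product name and build number from the registry.
import winreg

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion") as key:
    product, _ = winreg.QueryValueEx(key, "ProductName")
    build, _ = winreg.QueryValueEx(key, "CurrentBuild")

print(f"{product} (build {build})")  # e.g., this preview reports build 17623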

Where to learn more

Learn more about Microsoft Windows Server 2019 Insiders Preview, Windows Server Storage Spaces Direct (S2D), Azure and related software defined data center (SDDC) and software defined data infrastructure (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Microsoft Windows Server 2019 Insiders Preview gives a glimpse of some of the new features that are part of the next evolution of Windows Server in support of hybrid IT environments. In addition to new features and functionality that support not only hybrid cloud but also hybrid application development, deployment, DevOps and workloads, Microsoft is showing flexibility in management, ease of use, scalability, along with security as well as scale-out stability. If you have not looked at Windows Server for a while, or are involved with serverless, containers, Kubernetes among other initiatives, now is a good time to check out the Microsoft Windows Server 2019 Insiders Preview.

Ok, nuff said, for now.

Gs


Application Data Value Characteristics Everything Is Not The Same (Part I)

This is part one of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we start things off by looking at general application server storage I/O characteristics that have an impact on data value as well as access.

Application Data Value Software Defined Data Infrastructure Essentials Book SDDC

Everything is not the same across different organizations including Information Technology (IT) data centers, data infrastructures along with the applications as well as data they support. For example, there is so-called big data that can be many small files, objects, blobs or data and bit streams representing telemetry, click stream analytics, logs among other information.

Keep in mind that applications impact how data is accessed, used, processed, moved and stored. What this means is that a focus on data value, access patterns, and other related topics also needs to consider application performance, availability, capacity and economic (PACE) attributes.

If everything is not the same, why is so much data along with many applications treated the same from a PACE perspective?

Data Infrastructure resources including servers, storage, networks might be cheap or inexpensive, however, there is a cost to managing them along with data.

Managing includes data protection (backup, restore, BC, DR, HA, security) along with other activities. Likewise, there is a cost to the software along with cloud services among others. By understanding how applications use and interact with data, smarter, more informed data management decisions can be made.

IT Applications and Data Infrastructure Layers
IT Applications and Data Infrastructure Layers

Keep in mind that everything is not the same across various organizations, data centers, data infrastructures, data and the applications that use them. Also keep in mind that programs (e.g., applications) = algorithms (code) + data structures (how data is defined and organized, structured or unstructured).

There are traditional applications, along with those tied to Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning (ML), Big Data and other analytics including real-time click stream, media and entertainment, security and surveillance, log and telemetry processing among many others.

What this means is that there are many different applications with various characteristics and attributes, along with resource (server compute, I/O, network, memory and storage) as well as service requirements.

Common Applications Characteristics

Different applications will have various attributes, in general, as well as in how they are used, for example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. Performance, availability, capacity, and economics (PACE) describe the application and data characteristics and needs shown in the following figure.

Application and data PACE attributes
Application PACE attributes (via Software Defined Data Infrastructure Essentials)

All applications have PACE attributes, however:

  • PACE attributes vary by application and usage
  • Some applications and their data are more active than others
  • PACE characteristics may vary within different parts of an application

Think of an application along with its associated data PACE as its personality: how it behaves, what it does, how it does it, and when, along with its value, benefit, or cost as well as quality-of-service (QoS) attributes.

Understanding applications in different environments, including data value and associated PACE attributes, is essential for making informed server, storage, I/O and data infrastructure decisions. Data infrastructure decisions range from configuration to acquisitions or upgrades; when, where, why, and how to protect; and how to optimize performance including capacity planning, reporting, and troubleshooting, not to mention addressing budget concerns.

Primary PACE attributes for active and inactive applications and data are:

P – Performance and activity (how things get used)
A – Availability and durability (resiliency and data protection)
C – Capacity and space (what things use or occupy)
E – Economics and Energy (people, budgets, and other barriers)

Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network, or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, business continuity, disaster recovery) that determine the tools, technologies, and techniques to use.

Budgets are also nearly always a concern, which for some applications means enabling more performance per cost while others are focused on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention, and disposition, among others.

Performance and Activity (How Resources Get Used)

Some applications or components that comprise a larger solution will have more performance demands than others. Likewise, the performance characteristics of applications along with their associated data will also vary. Performance applies to the server, storage, and I/O networking hardware along with associated software and applications.

For servers, performance is focused on how much CPU or processor time is used, along with memory and I/O operations. I/O operations to create, read, update, or delete (CRUD) data include activity rate (frequency or data velocity) of I/O operations (IOPS). Other considerations include the volume or amount of data being moved (bandwidth, throughput, transfer), response time or latency, along with queue depths.

Activity is the amount of work to do or being done in a given amount of time (seconds, minutes, hours, days, weeks), which can be transactions, rates, or IOPS. Additional performance considerations include latency, bandwidth, throughput, response time, queues, reads or writes, gets or puts, updates, lists, directories, searches, page views, files opened, videos viewed, or downloads.
 
Server, storage, and I/O network performance include:

  • Processor CPU usage time and queues (user and system overhead)
  • Memory usage effectiveness including page and swap
  • I/O activity including between servers and storage
  • Errors, retransmission, retries, and rebuilds

The following figure shows a generic performance example of data being accessed (mixed reads, writes, random, sequential, big, small, low and high latency) on a local and a remote basis. The example shows how, for a given time interval (see lower right), applications are accessing and working with data via different data streams (larger image, left center). Also shown are queues and I/O handling along with end-to-end (E2E) response time.

fundamental server storage I/O
Server I/O performance fundamentals (via Software Defined Data Infrastructure Essentials)

Click here to view a larger version of the above figure.

Also shown on the left in the above figure is an example of E2E response time from the application through the various data infrastructure layers, as well as, lower center, the response time from the server to the memory or storage devices.

Various queues are shown in the middle of the above figure, which are indicators of how much work is occurring and whether the processing is keeping up with the work or causing backlogs. Context is needed for queues, as they exist in the server, I/O networking devices, and software drivers, as well as in storage among other locations.

Some basic server, storage, I/O metrics that matter include:

  • Queue depth of I/Os waiting to be processed and concurrency
  • CPU and memory usage to process I/Os
  • I/O size, or how much data can be moved in a given operation
  • I/O activity rate or IOPs = amount of data moved/I/O size per unit of time
  • Bandwidth = data moved per unit of time = I/O size × I/O rate
  • Latency usually increases with larger I/O sizes, decreases with smaller requests
  • I/O rates usually increase with smaller I/O sizes and vice versa
  • Bandwidth increases with larger I/O sizes and vice versa
  • Sequential stream access data may have better performance than some random access data
  • Not all data is conducive to being sequential stream, or random
  • Lower response time is better, higher activity rates and bandwidth are better

Queues with high latency and small I/O sizes or low I/O rates could indicate a performance bottleneck. Queues with low latency and high I/O rates along with good bandwidth (data being moved) could be a good thing. An important note is to look at several metrics, not just IOPS or activity, bandwidth, queues, or response time. Also, keep in mind that the metrics that matter for your environment may be different from those for somebody else.

Something to keep in perspective is that there can be a large amount of data with low performance, or a small amount of data with high-performance, not to mention many other variations. The important concept is that as space capacity scales, that does not mean performance also improves or vice versa, after all, everything is not the same.
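As a simple worked example of the relationships listed above, here is a minimal sketch (Python) that derives bandwidth from an I/O rate and I/O size, then shows how a larger I/O size at the same bandwidth means a lower I/O rate; the numbers are made-up example values for illustration only.

# Illustrative only: bandwidth = I/O size x I/O rate (IOPS); example values below.
io_size_kb = 8            # size of each I/O in KB (assumed example)
iops = 20_000             # I/O operations per second (assumed example)

bandwidth_mb_sec = (io_size_kb * iops) / 1024.0   # KB/sec -> MB/sec
print(f"{iops} IOPS at {io_size_kb} KB per I/O ~= {bandwidth_mb_sec:.1f} MB/sec")

# Same bandwidth with a larger I/O size means fewer (but bigger) I/Os,
# which usually also means higher latency per individual I/O.
big_io_kb = 128
iops_at_big_io = (bandwidth_mb_sec * 1024.0) / big_io_kb
print(f"Same bandwidth at {big_io_kb} KB per I/O ~= {iops_at_big_io:.0f} IOPS")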

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Keep in mind that when it comes to application data value characteristics, everything is not the same across various organizations, data centers, and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. However, all applications have some element (high or low) of performance, availability, capacity and economic (PACE) attributes, along with various similarities. Likewise, data has different value at various times. Continue reading the next post (Part II Application Data Availability Everything Is Not The Same) in this five-part mini-series here.

Ok, nuff said, for now.

Gs


Application Data Availability 4 3 2 1 Data Protection

This is part two of a five-part mini-series looking at Application Data Value Characteristics everything is not the same as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application performance, availability, capacity, economic (PACE) attributes that have an impact on data value as well as availability.

4 3 2 1 data protection  Book SDDC

Availability (Accessibility, Durability, Consistency)

Just as there are many different aspects and focus areas for performance, there are also several facets to availability. Note that application performance requires availability, and availability relies on some level of performance.

Availability is a broad and encompassing area that includes data protection to protect, preserve, and serve (backup/restore, archive, BC, BR, DR, HA) data and applications. There are logical and physical aspects of availability including data protection as well as security including key management (manage your keys or authentication and certificates) and permissions, among other things.

Availability = accessibility (can you get to your application and data) + durability (is the data intact and consistent). This includes basic Reliability, Availability, Serviceability (RAS), as well as high availability, accessibility, and durability. “Durable” has multiple meanings, so context is important. In one context, durable means how data infrastructure resources hold up to, survive, and tolerate wear and tear from use (i.e., endurance), for example, flash SSDs or mechanical devices such as Hard Disk Drives (HDDs). Another context for durable refers to data, meaning how many copies exist in various places.

Server, storage, and I/O network availability topics include:

  • Resiliency and self-healing to tolerate failure or disruption
  • Hardware, software, and services configured for resiliency
  • Accessibility to reach or be reached for handling work
  • Durability and consistency of data to be available for access
  • Protection of data, applications, and assets including security

Additional server I/O and data infrastructure along with storage topics include:

  • Backup/restore, replication, snapshots, sync, and copies
  • Basic Reliability, Availability, Serviceability, HA, fail over, BC, BR, and DR
  • Alternative paths, redundant components, and associated software
  • Applications that are fault-tolerant, resilient, and self-healing
  • Non disruptive upgrades, code (application or software) loads, and activation
  • Immediate data consistency and integrity vs. eventual consistency
  • Virus, malware, and other data corruption or loss prevention

From a data protection standpoint, the fundamental rule or guideline is 4 3 2 1, which means having at least four copies consisting of at least three versions (different points in time), at least two of which are on different systems or storage devices, and at least one of those is off-site (online, offline, cloud, or other). There are many variations of the 4 3 2 1 rule, shown in the following figure, along with approaches to how and what technology to use. We will go deeper into this subject in later chapters. For now, remember the following.

large version application server storage I/O
4 3 2 1 data protection (via Software Defined Data Infrastructure Essentials)

4 – At least four copies of data (or more); enables durability in case a copy goes bad, is deleted or corrupted, or a device or site fails.
3 – At least three versions of the data to retain (different points in time); enables various recovery points in time to restore, resume, or restart from.
2 – Data located on two or more systems (devices or media); enables protection against device, system, server, file system, or other fault/failure.
1 – At least one of those copies kept off-premises and not live (isolated from the active primary copy); enables resiliency across sites, as well as a space, time, and distance gap for protection.
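To make the 4 3 2 1 guideline concrete, below is a minimal sketch (Python) that checks an inventory of protection copies against the rule; the copy records and their field names are hypothetical examples, not any particular product's interface.

# Minimal 4 3 2 1 check: at least 4 copies, 3 versions (points in time),
# data on 2 or more systems/devices, and 1 or more copies off-site.
copies = [  # hypothetical copy inventory
    {"version": "2018-05-01", "system": "prod-array",   "offsite": False},
    {"version": "2018-05-01", "system": "backup-appl",  "offsite": False},
    {"version": "2018-04-30", "system": "backup-appl",  "offsite": False},
    {"version": "2018-04-28", "system": "cloud-bucket", "offsite": True},
]

def meets_4321(copies):
    return (len(copies) >= 4
            and len({c["version"] for c in copies}) >= 3
            and len({c["system"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

print("4 3 2 1 satisfied" if meets_4321(copies) else "protection gap")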

Capacity and Space (What Gets Consumed and Occupied)

In addition to being available and accessible in a timely manner (performance), data (and applications) occupy space. That space includes memory in servers, as well as consumable processor (CPU) time along with I/O (performance), including over networks.

Data and applications also consume storage space where they are stored. In addition to basic data space, there is also space consumed for metadata as well as protection copies (and overhead), application settings, logs, and other items. Another aspect of capacity includes network IP ports and addresses, software licenses, server, storage, and network bandwidth or service time.

Server, storage, and I/O network capacity topics include:

  • Consumable time-expiring resources (processor time, I/O, network bandwidth)
  • Network IP and other addresses
  • Physical resources of servers, storage, and I/O networking devices
  • Software licenses based on consumption or number of users
  • Primary and protection copies of data and applications
  • Active and standby data infrastructure resources and sites
  • Data footprint reduction (DFR) tools and techniques for space optimization
  • Policies, quotas, thresholds, limits, and capacity QoS
  • Application and database optimization

DFR includes various techniques, technologies, and tools to reduce the impact or overhead of protecting, preserving, and serving more data for longer periods of time. There are many different approaches to implementing a DFR strategy, since there are various applications and data.

Common DFR techniques and technologies include archiving, backup modernization, copy data management (CDM), clean up, compress and consolidate, data management, deletion and dedupe, storage tiering, RAID (including parity-based, erasure codes, local reconstruction codes [LRC], Reed-Solomon, and Ceph Shingled Erasure Code [SHEC], among others), along with protection configurations and thin provisioning, among others.

DFR can be implemented in various complementary locations from row-level compression in database or email to normalized databases, to file systems, operating systems, appliances, and storage systems using various techniques.

Also, keep in mind that not all data is the same; some is sparse, some is dense, some can be compressed or deduped while others cannot. Likewise, some data may not be compressible or dedupable. However, identical copies can be identified with links created to a common copy.
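As a simple illustration of why some data reduces well while other data does not, here is a minimal sketch (Python, standard library only) that estimates a dedupe ratio by hashing fixed-size blocks; the block size and sample data are arbitrary assumptions, and real DFR implementations are far more sophisticated.

# Illustrative only: estimate a dedupe ratio by hashing fixed-size blocks.
import hashlib
import os

def dedupe_ratio(data, block_size=4096):
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / max(len(unique), 1)

# Repetitive (dense with duplicates) data reduces well, while random-like data
# (already compressed or encrypted) typically does not.
repetitive = b"same block of data " * 10_000
random_like = os.urandom(len(repetitive))

print(f"repetitive data  ~ {dedupe_ratio(repetitive):.1f}:1")
print(f"random-like data ~ {dedupe_ratio(random_like):.1f}:1")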

Economics (People, Budgets, Energy and other Constraints)

If one constant in life and technology is change, then the other constant is concern about economics or cost. There is a cost to enable and maintain a data infrastructure, on-premises or in the cloud, which exists to protect, preserve, and serve data and information applications.

However, there should also be a benefit to having the data infrastructure to house data and support applications that provide information to users of the services. A common economic focus is what something costs, either as up-front capital expenditure (CapEx) or as an operating expenditure (OpEx) expense, along with recurring fees.

In general, economic considerations include:

  • Budgets (CapEx and OpEx), both up front and in recurring fees
  • Whether you buy, lease, rent, subscribe, or use free and open sources
  • People time needed to integrate and support even free open-source software
  • Costs including hardware, software, services, power, cooling, facilities, tools
  • People time includes base salary, benefits, training and education

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Keep in mind that when it comes to application data value characteristics, everything is not the same across various organizations, data centers, and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. All applications have some element of performance, availability, capacity, economic (PACE) needs as well as resource demands. With data storage there is often a focus on storage efficiency and utilization, which is where data footprint reduction (DFR) techniques, tools, trends, as well as technologies address capacity requirements. However, with data storage there is also an expanding focus on storage effectiveness, also known as productivity, tied to performance, along with availability including 4 3 2 1 data protection. Continue reading the next post (Part III Application Data Characteristics Types Everything Is Not The Same) in this series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Characteristics Types Everything Is Not The Same

This is part three of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same) as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application and data characteristics with a focus on different types of data. There is more to data than simply being big data, fast data, big fast data, or unstructured, structured, or semi-structured, some of which has been touched on in this series, with more to follow. Note that there is also data in terms of the programs, applications, code, rules, policies as well as configuration settings, metadata along with other items stored.

Application Data Value Software Defined Data Infrastructure Essentials Book SDDC

Various Types of Data

Data types along with characteristics include big data, little data, fast data, and old as well as new data, each with a different value, life cycle, volume, and velocity. There is data in files and objects, large and small, representing images, figures, text, and binary content, structured or unstructured, that is software defined by the applications that create, modify, and use it.

There are many different types of data and applications to meet various business, organization, or functional needs. Keep in mind that applications are based on programs which consist of algorithms and data structures that define the data, how to use it, as well as how and when to store it. Those data structures define data that will get transformed into information by programs while also being stored in memory and on data storage in various formats.

Just as various applications have different algorithms, they also have different types of data. Even though everything is not the same in all environments, or even in how the same applications get used across various organizations, there are some similarities and general characteristics. Keep in mind that information is the result of programs (applications and their algorithms) processing data into something useful or of value.

Data typically has a basic life cycle of:

  • Creation and some activity, including being protected
  • Dormant, followed by either continued activity or going inactive
  • Disposition (delete or remove)

In general, data can be:

  • Temporary, ephemeral or transient
  • Dynamic or changing (“hot data”)
  • Active static on-line, near-line, or off-line (“warm data”)
  • Inactive static on-line or off-line (“cold data”)

Data is organized as:

  • Structured
  • Semi-structured
  • Unstructured

General data characteristics include:

  • Value = From no value to unknown to some or high value
  • Volume = Amount of data, files, objects of a given size
  • Variety = Various types of data (small, big, fast, structured, unstructured)
  • Velocity = Data streams, flows, rates, load, process, access, active or static

The following figure shows how different data has various values over time. Data that has no value today or in the future can be deleted, while data with unknown value can be retained.

Different data with various values over time

Application Data Value across sddc
Data Value Known, Unknown and No Value

General characteristics include the value of the data, which in turn determines its performance, availability, capacity, and economic considerations. Also, data can be ephemeral (temporary) or kept for longer periods of time on persistent, non-volatile storage (you do not lose the data when power is turned off). Examples of temporary data include work and scratch areas, such as where data gets imported into, or exported out of, an application or database.

Data can also be little, big, or big and fast, terms which describe in part the size as well as volume along with the speed or velocity of being created, accessed, and processed. The importance of understanding characteristics of data and how their associated applications use them is to enable effective decision-making about performance, availability, capacity, and economics of data infrastructure resources.

Data Value

There is more to data storage than how much space capacity you get for a given cost.

All data has one of three basic values:

  • No value = ephemeral/temp/scratch = Why keep it?
  • Some value = current or emerging future value, which can be low or high = Keep
  • Unknown value = protect until value is unlocked, or no remaining value

In addition to the above basic three, data with some value can also be further subdivided into little value, some value, or high value. Of course, you can keep subdividing into as many more or different categories as needed, after all, everything is not always the same across environments.

Besides data having some value, that value can also change, increasing or decreasing over time, or even going from unknown to known, known to unknown, or to no value. Data with no value can be discarded; if in doubt, make and keep a copy of that data somewhere safe until its value (or lack of value) is fully known and understood.

The importance of understanding the value of data is to enable effective decision-making on where and how to protect, preserve, and cost-effectively store the data. Note that cost-effective does not necessarily mean the cheapest or lowest-cost approach; rather, it means the approach that aligns with the value and importance of the data at a given point in time.
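One way to picture that decision-making is a simple value-to-policy lookup. The following Python sketch is purely illustrative; the categories, protection choices, and tiers are hypothetical placeholders, not recommendations:

```python
# Hypothetical value categories mapped to placement and protection policies.
# Real policies depend on your applications, regulations, budget, and risk.
POLICIES = {
    "high":    {"protect": "frequent copies plus off-site", "tier": "fast (NVMe SSD)"},
    "some":    {"protect": "daily copies",                  "tier": "capacity (HDD/SSD)"},
    "unknown": {"protect": "retain and protect until value is known", "tier": "cold/archive"},
    "none":    {"protect": "none",                          "tier": "review, then delete"},
}

def placement_for(value_category):
    """Return the protection and placement choice for a data value category."""
    return POLICIES.get(value_category, POLICIES["unknown"])

print(placement_for("unknown"))
```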

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software-defined data center (SDDC), software-defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Data has different value at various times, and that value is also evolving. Everything Is Not The Same across various organizations, data centers, data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. Continue reading the next post (Part IV Application Data Volume Velocity Variety Everything Not The Same) in this series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Volume Velocity Variety Everything Is Not The Same

This is part four of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same) as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application and data characteristics with a focus on data volume, velocity, and variety; after all, everything is not the same, not to mention the many different aspects of big data as well as little data.

Application Data Value Software Defined Data Infrastructure Essentials Book SDDC

Volume of Data

More data is being created at a faster rate every day, and that data is being retained for longer periods. Some data being retained has known value, while a growing amount of data has an unknown value. Data is generated or created from many sources, including mobile devices, social networks, web-connected systems or machines, and sensors including IoT and IoD. Besides where data is created from, there are also many consumers of data (applications) that range from legacy to mobile, cloud, and IoT among others.

Unknown-value data may eventually have value in the future when somebody realizes that he can do something with it, or a technology tool or application becomes available to transform the data with unknown value into valuable information.

Some data gets retained in its native or raw form, while other data gets processed by application program algorithms into summary data, or is curated and aggregated with other data to be transformed into new useful data. The figure below shows, from left to right and front to back, more data being created, and that data also getting larger over time. For example, on the left are two data items, objects, files, or blocks representing some information.

In the center of the following figure are more columns and rows of data, with each of those data items also becoming larger. Moving farther to the right, there are yet more data items stacked up higher, as well as across and farther back, with those items also being larger. The following figure can represent blocks of storage, files in a file system, rows, and columns in a database or key-value repository, or objects in a cloud or object storage system.

Application Data Value sddc
Increasing data velocity and volume, more data and data getting larger

In addition to more data being created, some of that data is relatively small in terms of the records or data structure entities being stored. However, there can be a large quantity of those smaller data items. In addition to the amount of data, as well as the size of the data, protection or overhead copies of data are also kept.

Another dimension is that data is also getting larger, as the data structures describing a piece of data for an application have increased in size. For example, a still photograph taken with a digital camera, cell phone, or another mobile handheld device, drone, or other IoT device increases in size with each new generation of cameras as there are more megapixels.

Variety of Data

In addition to having value and volume, there are also different varieties of data, including ephemeral (temporary), persistent, primary, metadata, structured, semi-structured, unstructured, little, and big data. Keep in mind that programs, applications, tools, and utilities get stored as data, while they also use, create, access, and manage data.

There is also primary data and metadata, or data about data, as well as system data that is also sometimes referred to as metadata. Here is where context comes into play as part of tradecraft, as there can be metadata describing data being used by programs, as well as metadata about systems, applications, file systems, databases, and storage systems, among other things, including little and big data.

Context also matters regarding big data, as there are applications such as statistical analysis software and Hadoop, among others, for processing (analyzing) large amounts of data. The data being processed may not be big regarding the records or data entity items, but there may be a large volume. In addition to big data analytics, data, and applications, there is also data that is very big (as well as large volumes or collections of data sets).

For example, video and audio, among others, may also be referred to as big fast data, or large data. A challenge with larger data items is the complexity of moving them over distance promptly, as well as processing them, which requires new approaches, algorithms, data structures, and storage management techniques.

Likewise, the challenges with large volumes of smaller data are similar in that data needs to be moved, protected, preserved, and served cost-effectively for long periods of time. Both large and small data are stored (in memory or storage) in various types of data repositories.

In general, data in repositories is accessed locally, remotely, or via a cloud using the following (a brief access sketch follows this list):

  • Object and blob access via streams, queues, and Application Programming Interfaces (APIs)
  • File-based using local or networked file systems
  • Block-based access of disk partitions, LUNs (logical unit numbers), or volumes
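Here is a minimal Python sketch contrasting the three access styles. It is illustrative only: the block-style portion uses os.pread against a regular file standing in for a raw LUN or volume (and assumes a Unix-like system), and a plain dictionary stands in for an HTTP/REST object or blob endpoint:

```python
import os

# File-based access: navigate by path and name within a file system.
with open("/tmp/example.txt", "w") as f:
    f.write("hello file access")
with open("/tmp/example.txt") as f:
    print(f.read())

# Block-style access: address data by offset and length rather than by name.
# (Shown against a regular file; a real LUN or volume would be a raw device
# and typically requires elevated privileges. os.pread is Unix-only.)
fd = os.open("/tmp/example.txt", os.O_RDONLY)
print(os.pread(fd, 5, 6))   # read 5 bytes starting at byte offset 6
os.close(fd)

# Object-style access: a flat namespace of keys and values reached via an API.
# A dict stands in here for an object or blob service endpoint.
bucket = {}
bucket["logs/2018-05-01.json"] = b'{"event": "example"}'
print(bucket["logs/2018-05-01.json"])
```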

The following figure shows varieties of application data value including (left) photos or images, audio, videos, and various log, event, and telemetry data, as well as (right) sparse and dense data.

Application Data Value bits bytes blocks blobs bitstreams sddc
Varieties of data (bits, bytes, blocks, blobs, and bitstreams)

Velocity of Data

Data, in addition to having value (known, unknown, or none), volume (size and quantity), and variety (structured, unstructured, semi-structured, primary, metadata, small, big), also has velocity. Velocity refers to how fast (or slowly) data is accessed, including being stored, retrieved, updated, scanned, or whether it is active (updated, or fixed static) or dormant and inactive. In addition to data access and life cycle, velocity also refers to how data is used, such as random or sequential or some combination. Think of data velocity as how data, or streams of data, flow in various ways.

Velocity also describes how data is used and accessed, including:

  • Active (hot), static (warm and WORM), or dormant (cold)
  • Random or sequential, read or write-accessed
  • Real-time (online, synchronous) or time-delayed

Why this matters is that by understanding and knowing how applications use data, or how data is accessed via applications, you can make informed decisions. Also, having that insight enables you to design, configure, and manage servers, storage, and I/O resources (hardware, software, services) to meet various needs. Understanding Application Data Value, including the velocity of the data both when it is created and when it is used, is important for aligning the applicable performance techniques and technologies.
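As a simple, illustrative way to see why access pattern (one aspect of velocity) matters, the following Python sketch times sequential versus random reads of the same scratch file (roughly 100 MB). It is a rough demonstration only; operating system caching, the file system, and the underlying device (HDD versus SSD) will all skew the absolute numbers:

```python
import os
import random
import time

PATH = "scratch.bin"
BLOCK = 4096
BLOCKS = 25_000  # roughly 100 MB scratch file for illustration

# Create a scratch file to read back.
with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK) * BLOCKS)

def read_blocks(offsets):
    """Read one block at each offset and return the elapsed time in seconds."""
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)

print(f"sequential: {read_blocks(sequential):.2f}s  random: {read_blocks(shuffled):.2f}s")
os.remove(PATH)
```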

Where to learn more

Learn more about Application Data Value, application characteristics, performance, availability, capacity, economic (PACE) along with data protection, software-defined data center (SDDC), software-defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Data has different value, size, and velocity as part of its characteristics, including how it is used by various applications. Keep in mind that with Application Data Value Characteristics, Everything Is Not The Same across various organizations, data centers, and data infrastructures spanning legacy, cloud, and other software defined data center (SDDC) environments. Continue reading the next post (Part V Application Data Access life cycle Patterns Everything Is Not The Same) in this series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Access Life Cycle Patterns Everything Is Not The Same (Part V)

This is part five of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same) as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we look at various application and data life cycle patterns as well as wrap up this series.

Application Data Value Software Defined Data Infrastructure Essentials Book SDDC

Active (Hot), Static (Warm and WORM), or Dormant (Cold) Data and Lifecycles

When it comes to Application Data Value, a common question I hear is why not keep all data?

If the data has value, and you have a large enough budget, why not? On the other hand, most organizations have a budget and other constraints that determine how much and what data to retain.

Another common question I get asked (or told) is: isn’t the objective to keep less data to cut costs?

If the data has no value, then get rid of it. On the other hand, if data has value or unknown value, then find ways to remove the cost of keeping more data for longer periods of time so its value can be realized.

In general, the data life cycle (called by some cradle to grave, or birth/creation to disposition) is: data is created, saved and stored, perhaps updated and read with changing access patterns over time, along with changing value. During that time, the data (which includes applications and their settings) will be protected with copies or some other technique, and eventually disposed of.

Between the time when data is created and when it is disposed of, there are many variations of what gets done and needs to be done. Considering static data for a moment, some applications and their data, or data and their applications, create data which is active for a short period, then goes dormant, then is active again briefly before going cold (see the left side of the following figure). This is a classic application, data, and information life-cycle model (ILM), and tiering or data movement and migration that still applies for some scenarios.

Application Data Value
Changing data access patterns for different applications

However, a newer scenario over the past several years that continues to increase is shown on the right side of the above figure. In this scenario, data is initially active for updates, then goes cold or WORM (Write Once/Read Many); however, it warms back up as a static reference, on the web, as big data, and for other uses where it is used to create new data and information.

Data, in addition to its other attributes already mentioned, can be active (hot), residing in a memory cache or buffers inside a server, or on a fast storage appliance or caching appliance. Hot data means that it is actively being used for reads or writes (this is what the term heat map pertains to in the context of servers, storage, data, and applications). The heat map shows where the hot or active data is along with its other characteristics.

Context is important here, as there are also IT facilities heat maps, which refer to physical facilities including what servers are consuming power and generating heat. Note that some current and emerging data center infrastructure management (DCIM) tools can correlate the physical facilities power, cooling, and heat to actual work being done from an applications perspective. This correlated or converged management view enables more granular analysis and effective decision-making on how to best utilize data infrastructure resources.

In addition to being hot or active, data can be warm (not as heavily accessed) or cold (rarely if ever accessed), as well as online, near-line, or off-line. As their names imply, warm data may occasionally be used, either updated and written, or static and just being read. Some data also gets protected as WORM data using hardware or software technologies. WORM data, not to be confused with warm data, is fixed or immutable (cannot be changed).

When looking at data (or storage), it is important to see when the data was created as well as when it was modified. However, you should avoid the mistake of looking only at when it was created or modified: instead, also look to see when it was last read, as well as how often it is read. You might find that some data has not been updated for several years, but it is still accessed several times an hour or minute. Also, keep in mind that the metadata about the actual data may be being updated, even while the data itself is static.
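A quick way to check this on a file-by-file basis is to compare the modify and access timestamps. The following Python sketch is illustrative; note that many file systems are mounted with relaxed access-time updates (relatime or noatime), so last-access values may be approximate or stale:

```python
import os
import time

def activity_summary(path):
    """Report when a file was last modified versus last read, plus its size."""
    st = os.stat(path)
    fmt = "%Y-%m-%d %H:%M:%S"
    return {
        "path": path,
        "last_modified": time.strftime(fmt, time.localtime(st.st_mtime)),
        "last_accessed": time.strftime(fmt, time.localtime(st.st_atime)),
        "size_bytes": st.st_size,
    }

# Example: a file that has not been modified in years may still show recent reads.
print(activity_summary(__file__))
```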

Also, look at your applications’ characteristics as well as how data gets used, to see if it is conducive to caching or automated tiering based on activity, events, or time. For example, a large amount of data for an energy or oil exploration project might normally sit on slower, lower-cost storage, yet now and then some analysis needs to run against it.

Using data and storage management tools, given notice or based on activity, that large or big data could be promoted to faster storage, or applications migrated to be closer to the data, to speed up processing. Another example is weekly, monthly, quarterly, or year-end processing of financial, accounting, payroll, inventory, or enterprise resource planning (ERP) schedules. Knowing how and when the applications use the data (which is also understanding the data), automated tools and policies can be used to tier or cache data to speed up processing and thereby boost productivity.

All applications have performance, availability, capacity, economic (PACE) attributes, however:

  • PACE attributes vary by Application Data Value and usage
  • Some applications and their data are more active than others
  • PACE characteristics may vary within different parts of an application
  • PACE application and data characteristics along with value change over time

Read more about Application Data Value, PACE and application characteristics in Software Defined Data Infrastructure Essentials (CRC Press 2017).

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Keep in mind that Application Data Value everything is not the same across various organizations, data centers, data infrastructures, data and the applications that use them.

Also keep in mind that there is more data being created, the size of those data items, files, objects, entities, and records is also increasing, as well as the speed at which they get created and accessed. The challenge is not just that there is more data, or that data is bigger, or accessed faster; it is all of those together, along with changing value as well as diverse applications, to keep in perspective. With the new General Data Protection Regulation (GDPR) going into effect May 25, 2018, now is a good time to assess and gain insight into what data you have, its value, retention as well as disposition policies.

Remember, there are different data types, value, life-cycle, volume and velocity that change over time, and with Application Data Value Everything Is Not The Same, so why treat and manage everything the same?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Use Intel Optane NVMe U.2 SFF 8639 SSD drive in PCIe slot

server storage I/O data infrastructure trends

Need to install or use an Intel Optane NVMe 900P or other Nonvolatile Memory (NVM) Express NVMe based U.2 SFF 8639 disk drive form factor Solid State Device (SSD) in a PCIe slot?

For example, I needed to connect an Intel Optane NVMe 900P U.2 SFF 8639 drive form factor SSD into one of my servers using an available PCIe slot.

The solution I used was a carrier adapter card such as those from Ableconn (PEXU2-132 NVMe 2.5-inch U.2 [SFF-8639]), available via Amazon.com among other global venues.

Top Intel 750 NVMe PCIe AiC SSD, bottom Intel Optane NVMe 900P U.2 SSD with Ableconn carrier

The above image shows, on top, an Intel 750 NVMe PCIe Add in Card (AiC) SSD and, on the bottom, an Intel Optane NVMe 900P 280GB U.2 (SFF 8639) drive form factor SSD mounted on an Ableconn carrier adapter.

NVMe server storage I/O sddc

NVMe Tradecraft Refresher

NVMe is the protocol that is implemented with different topologies including local via PCIe using U.2 aka SFF-8639 (aka disk drive form factor), M.2 aka Next Generation Form Factor (NGFF) also known as "gum stick", along with PCIe Add in Card (AiC). NVMe accessed devices can be installed in laptops, ultrabooks, workstations, servers and storage systems using the various form factors. U.2 drives are also referred to by some as PCIe drives in that the NVMe command set protocol is implemented using a PCIe x4 physical connection to the devices. Jump ahead if you want to skip over the NVMe primer refresh material to learn more about U.2 8639 devices.

data infrastructure nvme u.2 8639 ssd
Various SSD device form factors and interfaces

In addition to form factor, NVMe devices can be direct attached and dedicated, rack and shared, as well as accessed via networks also known as fabrics such as NVMe over Fabrics.

NVMeoF FC-NVMe NVMe fabric SDDC
The many facets of NVMe as a front-end, back-end, direct attach and fabric

Context is important with NVMe in that fabric can mean NVMe over Fibre Channel (FC-NVMe) where the NVMe command set protocol is used in place of SCSI Fibre Channel Protocol (e.g. SCSI_FCP) aka FCP or what many simply know and refer to as Fibre Channel. NVMe over Fabric can also mean NVMe command set implemented over an RDMA over Converged Ethernet (RoCE) based network.

NVM and NVMe accessed flash SCM SSD storage

Another point of context is not to confuse Nonvolatile Memory (NVM), which is the storage or memory media, with NVMe, which is the interface for accessing storage (e.g. similar to SAS, SATA and others). As a refresher, NVM or the media are the various persistent memories (PM) including NVRAM, NAND Flash, 3D XPoint along with other storage class memories (SCM) used in SSD (in various packaging).

Learn more about 3D XPoint with the following resources:

Learn more (or refresh) your NVMe server storage I/O knowledge, experience tradecraft skill set with this post here. View this piece here looking at NVM vs. NVMe and how one is the media where data is stored, while the other is an access protocol (e.g. NVMe). Also visit www.thenvmeplace.com to view additional NVMe tips, tools, technologies, and related resources.

NVMe U.2 SFF-8639 aka 8639 SSD

At a quick glance, an NVMe U.2 SFF-8639 SSD may look like a SAS small form factor (SFF) 2.5" HDD or SSD. Also, keep in mind that HDDs and SSDs with a SAS interface have a small tab to prevent inserting them into a SATA port. As a reminder, SATA devices can plug into SAS ports, however not the other way around, which is what the keyed tab prevents (accidental insertion of a SAS device into a SATA port). Looking at the left-hand side of the following image you will see an NVMe SFF 8639 aka U.2 backplane connector which looks similar to a SAS port.

Note that depending on how it is implemented, including its internal controller, flash translation layer (FTL), firmware and other considerations, an NVMe U.2 or 8639 x4 SSD should have similar performance to a comparable NVMe x4 PCIe AiC (e.g. card) device. By comparable device, I mean the same type of NVM media (e.g. flash or 3D XPoint), FTL and controller. Likewise, generally a PCIe x8 should be faster than an x4; however, more PCIe lanes do not necessarily mean more performance, it’s what’s inside and how those lanes are actually used that matter.

NVMe U.2 8639 2.5" 1.8" SSD driveNVMe U.2 8639 2.5 1.8 SSD drive slot pin
NVMe U.2 SFF 8639 Drive (Software Defined Data Infrastructure Essentials CRC Press)

With U.2 devices the key tab that prevents SAS drives from inserting into a SATA port is where four pins that support PCIe x4 are located. What this all means is that a U.2 8639 port or socket can accept an NVMe, SAS or SATA device depending on how the port is configured. Note that the U.2 8639 port is either connected to a SAS controller for SAS and SATA devices or a PCIe port, riser or adapter.

On the left of the above figure is a view towards the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of the above figure is the connector end of an 8639 NVM SSD showing additional pin connectors compared to a SAS or SATA device. Those extra pins give PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM, or NAND flash SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

More PCIe lanes may not mean faster performance; verify whether those lanes (e.g. x4, x8, x16) are present just mechanically (e.g. physically) or are also electrically connected (usable) and actually being used. Also, note that some PCIe storage devices or adapters might be, for example, an x8 for supporting two channels or devices each at x4. Likewise, some devices might be x16 yet only support four x4 devices.
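On a Linux host, a quick way to see which NVMe controllers are present (before digging into their PCIe link width) is to read sysfs. The following Python sketch is illustrative and assumes a typical Linux /sys layout; attribute paths can vary by kernel and distribution:

```python
import glob
import os

def list_nvme_controllers():
    """List NVMe controllers visible to Linux via /sys/class/nvme."""
    controllers = []
    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        def read_attr(name):
            try:
                with open(os.path.join(ctrl, name)) as f:
                    return f.read().strip()
            except OSError:
                return "n/a"
        controllers.append({
            "controller": os.path.basename(ctrl),
            "model": read_attr("model"),
            "serial": read_attr("serial"),
        })
    return controllers

for ctrl in list_nvme_controllers():
    print(ctrl)
```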

NVMe U.2 SFF 8639 PCIe Drive SSD FAQ

Some common questions pertaining to NVMe U.2 aka SFF 8639 interface and form factor based SSDs include:

Why use U.2 type devices?

Compatibility with what’s available for server storage I/O slots in a server, appliance, or storage enclosure; the ability to mix and match SAS, SATA, and NVMe (with some caveats) in the same enclosure; and support for higher-density storage configurations that maximize available PCIe slots and enclosure density.

Is PCIe x4 with NVMe U.2 devices fast enough?

While not as fast as a PCIe AiC that fully supports x8 or x16 or higher, an x4 U.2 NVMe accessed SSD should be plenty fast for many applications. If you need more performance, then go with a faster AiC card.

Why not go with all PCIe AiC?

If you need the speed and simplicity, and have available PCIe card slots, then put as many of those in your systems or appliances as possible. On the other hand, some servers or appliances are PCIe slot constrained, so U.2 devices can be used to increase the number of devices attached to a PCIe backplane while also supporting SAS or SATA based SSDs or HDDs.

Why not use M.2 devices?

If your system or appliance supports NVMe M.2, those are good options. Some systems even support a combination: M.2 for local boot, staging, logs, work, and other storage space, with PCIe AiC along with U.2 devices for performance.

Why not use NVMeoF?

Good question, why not? That is, if your shared storage system supports NVMeoF or FC-NVMe, go ahead and use that; however, you might also need some local NVMe devices. Likewise, if yours is a software-defined storage platform that needs local storage, then NVMe U.2, M.2 and AiC or custom cards are an option. On the other hand, a shared fabric NVMe based solution may support a mixed pool of SAS, SATA along with NVMe U.2, M.2, AiC or custom cards as its back-end storage resources.

When not to use U.2?

If your system, appliance, or enclosure does not support U.2 and you do not have a need for it; or if you need more performance, such as from an x8 or x16 based AiC; or if you need shared storage. Granted, a shared storage system may have U.2 based SSD drives as back-end storage among other options.

How does the U.2 backplane connector attach to PCIe?

Via the enclosure's backplane, there is either a direct hardwired connection to the PCIe backplane, or a connector cable to a riser card or similar mechanism.

Does NVMe replace SAS, SATA or Fibre Channel as an interface?

The NVMe command set is an alternative to the traditional SCSI command set used in SAS and Fibre Channel. That means it can replace them, or co-exist, depending on your needs and preferences for accessing various storage devices.

Who supports U.2 devices?

Dell has supported U.2 aka PCIe drives in some of their servers for many years, as has Intel and many others. Likewise, U.2 8639 SSD drives including 3D XPoint and NAND flash-based are available from Intel among others.

Can you have AiC, U.2 and M.2 devices in the same system?

If your server, appliance, or storage system supports them, then yes. Likewise, there are M.2 to PCIe AiC, M.2 to SATA along with other adapters available for your servers, workstations or software-defined storage system platform.

NVMe U.2 carrier to PCIe adapter

The following images show examples of mounting an Intel Optane NVMe 900P accessed U.2 8639 SSD on an Ableconn PCIe AiC carrier. Once the U.2 SSD is mounted, the Ableconn adapter inserts into an available PCIe slot similar to other AiC devices. From a server or storage appliance's software perspective, the Ableconn is a pass-through device so your normal device drivers are used; for example, VMware vSphere ESXi 6.5 recognizes the Intel Optane device, as do Windows and other operating systems.

intel optane 900p u.2 8639 nvme drive bottom view
Intel Optane NVMe 900P U.2 SSD and Ableconn PCIe AiC carrier

The above image shows the Ableconn adapter carrier card along with NVMe U.2 8639 pins on the Intel Optane NVMe 900P.

intel optane 900p u.2 8639 nvme drive end view
Views of Intel Optane NVMe 900P U.2 8639 and Ableconn carrier connectors

The above image shows an edge view of the NVMe U.2 SFF 8639 Intel Optane NVMe 900P SSD along with those on the Ableconn adapter carrier. The following images show an Intel Optane NVMe 900P SSD installed in a PCIe AiC slot using an Ableconn carrier, along with how VMware vSphere ESXi 6.5 sees the device using plug and play NVMe device drivers.

NVMe U.2 8639 installed in PCIe AiC Slot
Intel Optane NVMe 900P U.2 SSD installed in PCIe AiC Slot

NVMe U.2 8639 and VMware vSphere ESXi
How VMware vSphere ESXi 6.5 sees NVMe U.2 device

Intel Optane NVMe 3D XPoint based and other SSDs

Here are some Amazon.com links to various Intel Optane NVMe 3D XPoint based SSDs in different packaging form factors:

Here are some Amazon.com links to various Intel and other vendor NAND flash based NVMe accessed SSDs including U.2, M.2 and AiC form factors:

Note in addition to carriers to adapt U.2 8639 devices to PCIe AiC form factor and interfaces, there are also M.2 NGFF to PCIe AiC among others. An example is the Ableconn M.2 NGFF PCIe SSD to PCI Express 3.0 x4 Host Adapter Card.

In addition to Amazon.com, Newegg.com, Ebay and many other venues also carry NVMe related technologies.
The Intel Optane NVMe 900P is newer; however, the Intel 750 Series along with other Intel NAND flash based SSDs are still good price performers and provide value. I have accumulated several Intel 750 NVMe devices over the past few years as they are great price performers. Check out this related post Get in the NVMe SSD game (if you are not already).

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

NVMe accessed storage is in your future; however, there are various questions to address including exploring your options for types of devices, form factors, and configurations among other topics. Some NVMe accessed storage is direct attached and dedicated in laptops, ultrabooks, workstations and servers including PCIe AiC, M.2 and U.2 SSDs, while other storage is shared networked aka fabric based. NVMe over fabric (e.g. NVMeoF) includes RDMA over converged Ethernet (RoCE) as well as NVMe over Fibre Channel (e.g. FC-NVMe). Networked fabric access of pooled shared NVMe storage systems and appliances can also include internal NVMe attached devices (e.g. as part of back-end storage) as well as other SSDs (e.g. SAS, SATA).

General wrap-up (for now) NVMe U.2 8639 and related tips include:

  • Verify the performance of the device vs. how many PCIe lanes exist
  • Update any applicable BIOS/UEFI, device drivers and other software
  • Check the form factor and interface needed (e.g. U.2, M.2 / NGFF, AiC) for a given scenario
  • Look carefully at the NVMe devices being ordered for proper form factor and interface
  • With M.2 verify that it is an NVMe enabled device vs. SATA

Learn more about NVMe at www.thenvmeplace.com including how to use Intel Optane NVMe 900P U.2 SFF 8639 disk drive form factor SSDs in PCIe slots as well as for fabric among other scenarios.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads

server storage I/O data infrastructure trends

By Greg Schulz – www.storageioblog.com – November 20, 2017

HPE announced today a new AMD EPYC 7000 Powered Gen 10 ProLiant DL385 for Software Defined Workloads including server virtualization, software-defined data center (SDDC), software-defined data infrastructure (SDDI), and software-defined storage among others. These new servers are part of a broader Gen10 HPE portfolio of ProLiant DL systems.

HPE AMD EPYC Gen10 DL385
24 Small Form Factor Drive front view DL385 Gen 10 Via HPE

The value proposition HPE is promoting for these new AMD powered Gen 10 DL385 servers, besides supporting software-defined, SDDI, SDDC, and related workloads, is security, density, and lower price than others. HPE is claiming that with the new AMD EPYC system on a chip (SoC) processor powered Gen 10 DL385 it is offering up to 50 percent lower cost per virtual machine (VM) than traditional server solutions.

About HPE AMD Powered Gen 10 DL385

HPE AMD EPYC 7000 Gen 10 DL385 features:

  • 2U (height) form factor
  • HPE OneView and iLO management
  • Flexible HPE finance options
  • Data Infrastructure Security
  • AMD EPYC 7000 System on Chip (SoC) processors
  • NVMe storage (Embedded M.2 and U.2/8639 Small Form Factor (SFF) e.g. drive form factor)
  • Address server I/O and memory bottlenecks

These new HPE servers are positioned for:

  • Software Defined, Server Virtualization
  • Virtual Desktop Infrastructure (VDI) workspaces
  • HPC, Cloud and other general high-density workloads
  • General Data Infrastructure workloads that benefit from memory-centric or GPUs

Different AMD Powered DL385 ProLiant Gen 10 Packaging Options

Common across AMD EPYC 7000 powered Gen 10 DL385 servers are 2U high form factor, iLO management software and interfaces, flexible LAN on Motherboard (LOM) options, MicroSD (optional dual MicroSD), NVMe (embedded M.2 and SFF U.2) server storage I/O interface and drives, health and status LEDs, GPU support, single or dual socket processors.

HPE AMD EPYC Gen10 DL385 Look Inside
HPE DL385 Gen10 Inside View Via HPE

HPE AMD EPYC Gen10 DL385 Rear View
HPE DL385 Gen10 Rear View Via HPE

Other features include up to three storage drive bays with support for Large Form Factor (LFF) and Small Form Factor (SFF) devices (HDD and SSD) including SFF NVMe (e.g., U.2) SSDs, up to 4 x GbE NICs, and a PCIe riser for GPUs (an optional second riser requires the second processor). Other features and options include HPE SmartArray (RAID), up to 6 cooling fans, and internal and external USB 3, along with an optional universal media bay that can add a front display, an optional Optical Disc Drive (ODD), and optional 2 x U.2 NVMe SFF SSDs. Note that the media bay occupies one of the three storage drive bays.

HPE AMD EPYC Gen10 DL385 Form Factor
HPE DL385 Form Factor Via HPE

  • Up to 3 x drive bays
  • Up to 12 LFF drives (2 per bay)
  • Up to 24 SFF drives (3 x 8 drive bays, 6 SFF + 2 NVMe U.2 or 8 x NVMe)

AMD EPYC 7000 Series

The AMD EPYC 7000 series is available in single and dual socket versions. View additional AMD EPYC speeds and feeds in this data sheet (PDF), along with AMD server benchmarks here.

HPE AMD EPYC Specifications
HPE DL385 Gen 10 AMD EPYC Specifications Via HPE

AMD EPYC 7000 General Features

  • Single and dual socket
  • Up to 32 cores, 64 threads per socket
  • Up to 16 DDR4 DIMMS over eight channels per socket (e.g., up to 2TB RAM)
  • Up to 128 PCIe Gen 3 lanes (e.g. combination of x4, x8, x16 etc)
  • Future 128GB DIMM support

AMD EPYC 7000 Security Features

  • Secure processor and secure boot for malware rootkit protection
  • System memory encryption (SME)
  • Secure Encrypted Virtualization (SEV) hypervisors and guest virtual machine memory protection
  • Secure move (e.g., encrypted) between enabled servers

Where To Learn More

Learn more about Data Infrastructure and related server technology, trends, tools, techniques, tradecraft and tips with the following links.

  • AMD EPYC 7000 System on Chip (SoC) processors
  • Gen10 HPE portfolio and ProLiant DL systems.
  • Various Data Infrastructure related news commentary, events, tips and articles
  • Data Center and Data Infrastructure industry links
  • Data Infrastructure server storage I/O network Recommended Reading List Book Shelf
  • Software Defined Data Infrastructure Essentials (CRC 2017) Book
What This All Means

With the flexible options including HDD, SSD as well as NVMe accessible SSDs, large memory capacity along with computing cores, these new solutions provide good data infrastructure server density (e.g., CPU, memory, I/O, storage) per cubic foot or meter for the cost.

I look forward to trying one of these systems out for software-defined scenarios including virtualization and software-defined storage (SDS) among other workload scenarios. Overall the HPE announcement of the new AMD EPYC 7000 Powered Gen 10 ProLiant DL385 looks to be a good option for many environments.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Dell EMC VMware September 2017 Software Defined Data Infrastructure Updates

    server storage I/O data infrastructure trends

    vmworld 2017

    September was a busy month including VMworld in Las Vegas that featured many Dell EMC VMware (among other) software defined data infrastructure updates and announcements.

    A summary of September VMware (and partner) related announcements include:

    VMware on AWS via Amazon.com
    VMware and AWS via Amazon Web Services

    VMware and AWS

    Some of you might recall VMware's earlier attempt at public cloud with the vCloud Air service (see the Server StorageIO lab test drive here), which has since been deprecated (e.g. retired). This new approach by VMware leverages the large global presence of AWS, enabling customers to set up public or hybrid vSphere, vSAN and NSX based clouds, as well as software defined data centers (SDDC) and software defined data infrastructures (SDDI).

    VMware Cloud on AWS exists on dedicated, single-tenant hardware (unlike Elastic Cloud Compute (EC2) multi-tenant instances or VMs) that supports from 4 to 16 underlying hosts per cluster. Unlike EC2 virtual machine instances, VMware Cloud on AWS is delivered on elastic bare metal (e.g. dedicated private servers aka DPS). Note that while AWS EC2 is more commonly known, AWS also has other options for server compute including Lambda microservices serverless containers, as well as Lightsail virtual private servers (VPS).

    Besides servers with storage optimized I/O featuring low latency NVMe accessed SSDs, and applicable underlying server I/O networking, VMware Cloud on AWS leverages the VMware software stack directly on underlying host servers (e.g. there is no virtualization nesting taking place). This means more robust performance should be expected like in your on premise VMware environment. VM workloads can move between your onsite VMware systems and VMware Cloud on AWS using various tools. The VMware Cloud on AWS is delivered and managed by VMware, including pricing. Learn more about VMware Cloud on AWS here, and here (VMware PDF) and here (VMware Hands On Lab aka HOL).

    Read more about AWS September news and related updates here in this StorageIOblog post.

    VMware PKS
    VMware and Pivotal PKS via VMware.com

    Pivotal Container Service (PKS) and Google Kubernetes Partnership

    During VMworld VMware, Pivotal and Google announced a partnership for enabling Kubernetes container management called PKS (Pivotal Container Service). Kubernetes is evolving as a popular open source container microservice serverless management orchestration platform that has roots within Google. What this means is that what is good for Google and others for managing containers, is now good for VMware and Pivotal. In related news, VMware has become a platinum sponsor of the Cloud Native Compute Foundation (CNCF). If you are not familiar with CNCF, add it to your vocabulary and learn more here at www.cncf.io.

    Other VMworld and September VMware related announcements

    Hyper converged data infrastructure provider Maxta has announced a VMware vSphere Escape Pod (parachute not included ;) ) to facilitate migration from ESXi based to Red Hat Linux hypervisor environments. There is an IBM and VMware cloud partnership, along with Dell EMC, IBM and VMware joint cloud solutions. White listing of VMware vSphere VMs for enhanced security has been combined with earlier announced capabilities.

    Note that both VMware with vSphere ESXi and Microsoft with Hyper-V (Windows and Azure based) are supporting various approaches for securing Virtual Machines (VMs) and the hosts they run on. These enhancements are moving beyond simply encrypting the VMDK or VHDX virtual disks the VMs reside in or use, as well as more than password, ssh and other security measures. For example Microsoft is adding support for host guarded fabrics (and machine hosts) as well as shielded VMs. Keep an eye on how both VMware and Microsoft extend the data protection and security capabilities for software defined data infrastructures for their solutions and services.

    Dell EMC Announcements

    At VMworld in September Dell EMC announcements included:

    • Hyper Converged Infrastructure (HCI) and Hybrid Cloud enhancements
    • Data Protection, Governance and Management suite updates
    • XtremIO X2 all flash array (AFA) availability optimized for vSphere and VDI

    HCI and Hybrid Cloud enhancements include VxRail Appliance, VxRack SDDC (vSphere 6.5, vSAN 6.6, NSX 6.3) along with hybrid cloud platforms (Enterprise Hybrid Cloud and Native Hybrid Cloud) along with vSAN Ready Nodes (vSAN 6.6 and encryption) and VMware Ready System. Note that Dell EMC in addition to supporting VMware hybrid clouds also previously announced solutions for Microsoft Azure Stack back in May.

    Software Defined Data Infrastructure Essentials at VMworld Bookstore


    Software Defined Data Infrastructure Essentials (CRC Press) at VMworld bookstore

    My new book Software Defined Data Infrastructure Essentials (CRC Press) made its public debut in the VMware book store where I did a book signing event. You can get your copy of Software Defined Data Infrastructure Essentials which includes Software Defined Data Centers (SDDC) along with hybrid, multi-cloud, serverless, converged and related topics at Amazon among other venues. Learn more here.

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    What This All Means

    A year ago at VMworld the initial conversations were started around what would become the VMware Cloud on AWS solution. Also a year ago besides VMware Integrated Containers (VIC) and some other pieces, the overall container and in particular related management story was a bit cloudy (pun intended). However, now the fog and cloud seem to be clearing with the PKS solution, along with details of VMware Cloud on AWS. Likewise vSphere, vSAN and NSX along with associated vRealize tools continue to evolve as well as customer deployment growing. All in all, VMware continues to evolve, let’s see how things progress now over the year until the next VMworld.

    By the way, if you have not heard, its Blogtober, check out some of the other blogs and posts occurring during October here.

    Ok, nuff said, for now.
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Microsoft Azure September 2017 Software Defined Data Infrastructure Updates

    server storage I/O data infrastructure trends

    September was a busy month for data infrastructure topics as well as for Microsoft in terms of new and enhanced technologies. Wrapping up September was Microsoft Ignite, where Azure, Azure Stack, Windows, O365, AI, IoT, and development tools announcements occurred, along with others from earlier in the month. As part of the September announcements, Microsoft released a new version of Windows Server (e.g. 1709) that has a focus on enhanced container support. Note that if you have deployed Storage Spaces Direct (S2D) and are looking to upgrade to 1709, do your homework as there are some caveats that will cause you to wait for the next release. Note that there had been new storage related enhancements slated for the September update; however, at Ignite those were announced as being pushed to the next semi-annual release. Learn more here and also here.

    Azure Files and NFS

    Microsoft made several Azure file storage related announcements and public previews during September including native NFS based file sharing as a companion to existing Azure Files, along with a public preview of the new Azure File Sync service. Native NFS based file sharing (public preview announced, service slated to be available in 2018) is a software defined storage deployment of NetApp OnTAP running on top of Azure data infrastructure including virtual machines and leveraging underlying Azure storage.

    Note that the new native NFS is in addition to the earlier native Azure Files accessed via HTTP REST and SMB3 enabling sharing of files inside Azure public cloud, as well as accessible externally from Windows based and Linux platforms including on premises. Learn more about Azure Storage and Azure Files here.
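    As a hedged illustration of programmatic access to Azure Files (the HTTP REST path mentioned above), the sketch below uses the azure-storage-file-share Python package; the connection string, share name, and file path are placeholders, not values from this post, and the parent directory is assumed to already exist in the share:

```python
# pip install azure-storage-file-share  (Azure Files SDK for Python)
from azure.storage.fileshare import ShareFileClient

conn_str = "<your-storage-account-connection-string>"  # placeholder

file_client = ShareFileClient.from_connection_string(
    conn_str=conn_str,
    share_name="myshare",           # hypothetical Azure Files share
    file_path="reports/notes.txt",  # hypothetical path; directory must exist
)

# Upload a local file to the Azure file share over the REST API.
with open("notes.txt", "rb") as src:
    file_client.upload_file(src)

# Read it back.
print(file_client.download_file().readall().decode())
```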

    Azure File Sync (AFS)

    Azure File Sync AFS

    Azure File Sync (AFS) has now entered public preview. While users of Windows-based systems have been able to access and share Azure Files in the past, AFS is something different. I have used AFS for some time now during several private preview iterations, having seen how it has evolved, along with how Microsoft listens and incorporates feedback into the solution.

    Let's take a look at what AFS is, what it does, how it works, and where and when to use it, among other considerations. With AFS, different and independent systems can now synchronize file shares through Azure. Currently in the AFS preview, Windows Server 2012 and 2016 are supported including bare metal, virtual, and cloud based. For example, I have had bare metal, virtual (VMware), and cloud (Azure and AWS) systems participating in file sync activities using AFS.

    Not to be confused with some other storage related AFS acronyms, including the Andrew File System among others, the new Microsoft Azure File Sync service enables files to be synchronized across different servers via Azure. This is different than the previously available Azure Files share service that enables files stored in Azure cloud storage to be accessed via Windows and Linux systems within Azure, as well as natively by Windows platforms outside of Azure. Likewise, this is different from the recently announced Microsoft Azure native NFS file sharing service in partnership with NetApp (e.g. powered by OnTAP cloud).

    AFS can be used to synchronize across different on-premises as well as cloud servers, which can also function as caches. What this means is that for Windows work folders served via different on-premises servers, those files can be synchronized across Azure to other locations. Besides providing caching, cloud tiering, and enterprise file sync and share (EFSS) capabilities, AFS also has robust optimization for data movement to and from the cloud and across sites, along with management tools including diagnostics, performance, and activity monitoring among others.

    Check out the AFS preview including planning for an Azure File Sync (preview) deployment (Docs Microsoft), and for those who have Yammer accounts, here is the AFS preview group link.

    Microsoft Azure Blob Events via Microsoft

    Azure Blob Storage Tiering and Event Triggers

    Two other Azure storage features that are in public preview include blob tiering (for cold archiving) and event triggers. As the name implies, blob tiering enables automatic migration of dormant data from active to cold, inactive storage. Event triggers are policy rules (code) that get executed when a blob is stored, in order to perform various functions or tasks. Here is an overview of blob events, along with a quick start from Microsoft here.
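    As a simple illustration of tiering, the following sketch moves an existing block blob to the Archive (cold) tier. It assumes the azure-storage-blob Python SDK; the connection string, container, and blob names are placeholders rather than anything referenced in this post.

        # Minimal sketch: change the access tier of an existing block blob
        # (assumes: pip install azure-storage-blob).
        from azure.storage.blob import BlobServiceClient

        conn_str = "<storage-account-connection-string>"  # placeholder
        service = BlobServiceClient.from_connection_string(conn_str)
        blob = service.get_blob_client(container="archive-demo", blob="old-report.csv")

        # Move the blob from Hot/Cool to the Archive (cold) tier
        blob.set_standard_blob_tier("Archive")

        # Confirm the tier now reported on the blob properties
        print(blob.get_blob_properties().blob_tier)

    The preview tiering feature automates this kind of movement via policy, whereas the sketch above shows the equivalent explicit, per-blob operation.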

    Keep in mind that not all blob and object storage are the same; a good example is Microsoft Azure, which has page, block, and append blobs. Append blobs are similar to the objects you may be familiar with from other services. Here is a Microsoft overview of the various Azure blob types, including what to use when.
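    To make the append blob distinction concrete, here is a small sketch that creates an append blob and adds data to it, again assuming the azure-storage-blob Python SDK; the container and blob names are illustrative only.

        # Minimal sketch: create an append blob and append records to it
        # (assumes: pip install azure-storage-blob).
        from azure.storage.blob import BlobServiceClient

        conn_str = "<storage-account-connection-string>"  # placeholder
        service = BlobServiceClient.from_connection_string(conn_str)
        blob = service.get_blob_client(container="logs", blob="app.log")

        blob.create_append_blob()          # append blobs must be created first
        blob.append_block(b"event one\n")  # each call appends a block at the end
        blob.append_block(b"event two\n")

        print(blob.download_blob().readall().decode())

    Page blobs (used for virtual machine disks) and block blobs (used for general objects) have different write semantics, which is why picking the right blob type up front matters.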

    Project Honolulu and Windows Server Enhancements

    Microsoft has evolved from the command prompt (e.g. early MS-DOS) to a GUI with Windows, and back to the command line with PowerShell, which left some thinking there is no longer a need for a GUI. Even though Microsoft has extended its CLI with PowerShell spanning Windows platforms and Azure, along with adding a Linux command shell, there are those who still want or need a GUI. Project Honolulu is the effort to bring GUI based management back to Windows in a simplified way for what had been headless, desktop-less deployments (e.g. Nano, Server Core). Microsoft had Server Management Tools (SMT) accessible via the Azure Portal, which has since been discontinued.


    Project Honolulu Image via Microsoft.com

    This is where Project Honolulu comes into play for managing Windows Server platforms. What this means is that those who do not want to rely on, or have a dependency on, PowerShell now have an alternative option. Learn more about Project Honolulu here and here, including downloading the public preview here.

    Storage Spaces Direct (S2D) Kepler Appliance

    Data infrastructure provider DataON has announced a new turnkey Windows Server 2016 Storage Spaces Direct (S2D) powered hyper-converged infrastructure solution (e.g. a productization of Project Kepler-47) built on two-node small form factor servers (in partnership with MSI). How small? Think suitcase or airplane roller-board carry-on luggage size.

    What this means is that you can get into the converged, hyper-converged, software defined storage game with Windows-based servers supporting Hyper-V virtual machines (Windows and Linux), including hardware, for around $10,000 USD (varies by configuration and other options).

    Azure and Microsoft Networking News

    Speaking of the Microsoft Azure public cloud, ever wonder what the network that enables the service looks like, or what some of its software defined networking (SDN) and network function virtualization (NFV) objectives are? Have a look at this piece over at Data Center Knowledge.

    In related Windows, Azure, and other news, Microsoft, Facebook, and Telxius have completed the installation of a high-capacity subsea network cable across the Atlantic Ocean. What's so interesting from a data infrastructure, cloud, or legacy server storage I/O and data center perspective? The new network was built by the companies themselves vs. in the past by a telco provider consortium with the subsequent bandwidth sold or leased to others.

    This new network cable is also some 4,000 miles long, laid at depths of up to 11,000 feet, and with current optics supports 160 terabits (e.g. 20 terabytes) per second, capable of supporting 71 million HD videos streamed simultaneously. To put things into perspective, some residential fiber optic services can operate best case at up to 1 gigabit per second (line speed) and in an asymmetrical fashion (faster downloads than uploads). Granted, there are some 10 Gbit based services out there, more common for commercial than residential use. Simply put, there is a large amount of added bandwidth across the Atlantic for Microsoft and Facebook to support growing demands.
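    As a quick back-of-envelope check on the figures quoted above (a sketch using only the numbers in the text, not additional data), the arithmetic works out as follows.

        # Back-of-envelope math on the quoted subsea cable capacity.
        capacity_bps = 160e12                 # 160 terabits per second
        capacity_Bps = capacity_bps / 8       # = 2.0e13 bytes/s, i.e. ~20 terabytes/s
        streams = 71e6                        # 71 million simultaneous HD streams

        per_stream_bps = capacity_bps / streams
        print(capacity_Bps / 1e12, "TB per second")        # 20.0
        print(per_stream_bps / 1e6, "Mbps per HD stream")  # ~2.25

    In other words, 160 terabits per second is the quoted 20 terabytes per second, and dividing it across 71 million streams leaves roughly 2.25 Mbps each, which is in the ballpark of a compressed HD video stream.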

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    What This All Means

    Microsoft announced a new release of Windows Server at Ignite as part of its new semi-annual release cycle. This latest version of Windows Server is optimized for containers. In addition to Windows Server enhancements, Microsoft continues to extend Azure and related technologies for public, private, and hybrid cloud, as well as software defined data infrastructures.

    By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.

    Ok, nuff said, for now.
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.