Google Cloud Platform GCP announced new Los Angeles Region

Google Cloud Platform (GCP) has announced a new Los Angeles region (us-west2) with three initial Availability Zones (AZs), also known as data centers. Keep in mind that a region is a geographic area made up of two or more AZs. Thus, a region has multiple data centers for availability, resiliency, and durability.

The new GCP us-west2 region is the fifth in the US and the seventh in the Americas. GCP regions (and AZs) in the Americas include Iowa (us-central1), Montreal, Quebec, Canada (northamerica-northeast1), Northern Virginia (us-east4), Oregon (us-west1), Los Angeles (us-west2), South Carolina (us-east1) and Sao Paulo, Brazil (southamerica-east1). View other geographies and available services, including Europe and Asia-Pacific, here.
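To make the region and AZ relationship concrete, here is a minimal sketch in plain Python. The zone names are assumptions following GCP's usual region-a/b/c naming pattern, not pulled from a live API; the point is simply why spreading a deployment across zones adds resiliency:

```python
# Illustrative only: zone names follow GCP's typical pattern, not a live lookup.

REGIONS = {
    "us-west2": ["us-west2-a", "us-west2-b", "us-west2-c"],  # new Los Angeles region
    "us-west1": ["us-west1-a", "us-west1-b", "us-west1-c"],  # Oregon
}

def survives_zone_failure(deployed_zones):
    """A deployment tolerates losing one zone only if it spans at least two."""
    return len(set(deployed_zones)) >= 2

# A single-zone deployment is exposed; spreading across zones adds resiliency.
single = ["us-west2-a"]
spread = ["us-west2-a", "us-west2-b", "us-west2-c"]
print(survives_zone_failure(single))  # False
print(survives_zone_failure(spread))  # True
```

The same reasoning is why providers launch regions with at least two (here, three) zones from day one.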

How Does GCP Compare to AWS and Azure?

The following are simple graphical comparisons of what Amazon Web Services (AWS) and Microsoft Azure currently have deployed for regions and AZs across different geographies. Note that each region may have a different set of services available, so check your cloud provider's notes as to what is currently available at various locations.

Google Cloud Platform regions
Google Cloud Platform locations (regions and AZs). Image via Google.com

AWS Regions, AZ locations
AWS regions and AZs. Image via AWS.com

Microsoft Azure Cloud Region Locations
Microsoft Azure regions and AZs. Image via Azure.com

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Google continues to evolve its public cloud platform (GCP), both regarding geographical physical locations (e.g., regions and AZs) and regarding features, functionality, and extensibility. By adding a new Los Angeles (us-west2) region with three AZs within it, Google is providing a local point of presence for data infrastructure intense (server compute, memory, I/O, storage) applications such as those in media, entertainment, high-performance compute, and aerospace, among others, in the Southern California area. Overall, it is good to see not only new features being added to GCP but also new physical points of presence.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch

Here is the 2018 Hot Popular New Trending Data Infrastructure Vendors To Watch list, which includes startups as well as established vendors doing new things. This piece follows last year's hot favorite trending data infrastructure vendors to watch list (here), as well as the who will be at the top of the storage world in a decade piece here.

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Data Infrastructures Support Information Systems Applications and Their Data

Data infrastructures are what exist inside physical data centers and cloud availability zones (AZs), defined to provide traditional as well as cloud services. Cloud and legacy data infrastructures combine hardware (server, storage, I/O network) and software along with management tools, policies, tradecraft techniques (skills), and best practices to support applications and their data. There are different types of data infrastructures to meet the needs of various environments that range in size, scope, focus, application workloads, performance, and capacity.

Another important aspect of data infrastructures is that they exist to protect, preserve, secure and serve applications that transform data into information. This means that availability and data protection, including archive, backup, business continuance (BC), business resiliency (BR), disaster recovery (DR), privacy and security, among other related topics, technologies, techniques, and trends, are essential data infrastructure topics.

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Different timelines of adoption and deployment for various audiences

Some of those on this year's list are focused on different technology areas, others on the size or type of vendor, supplier, or service provider. Some are new, startups, or evolving, while others are established; which of those matters most varies depending on whether you are an industry insider or an IT customer. The mix includes names you may not have heard of, for those who want or need the most current list of startups to rattle off for industry adoption (and deployment), as well as what some established players are doing that might lead to customer deployment (and adoption).

AMD – The AMD EPYC family of processors is opening up new opportunities for AMD to challenge Intel, among others, for a more significant share of the general-purpose compute market in support of data center and data infrastructure workloads. An advantage that AMD has, and is playing to in the industry speeds, feeds, slots, and watts price-performance game, is the ability to support more memory and PCIe lanes per socket than others, including Intel. Keep in mind that PCIe lanes will become even more critical as NVMe deployment increases, along with the use of GPUs and faster Ethernet among other devices. Name-brand vendors including Dell and HPE, among others, have announced or are shipping AMD EPYC based processors.

Aperion – Cloud and managed service provider with diverse capabilities.

Amazon Web Services (AWS) – Continues to expand its footprint regarding regions and availability zones (AZs), also known as data centers within regions, as well as services along with the breadth of those capabilities. AWS recently announced a new Snowball Edge (SBE), which in the past has been a data migration appliance, now enhanced with on-prem Elastic Compute Cloud (EC2) capabilities. What this means is that AWS can put on-prem compute capabilities into a storage appliance for short-term data movement, migration, conversion, and importing of virtual machines, among other items.

On the other hand, AWS can also be seen as using SBE as a first entry into placing equipment on-prem for hybrid clouds, or as a converged infrastructure (CI), hyper-converged infrastructure (HCI), cloud-in-a-box offering similar to Microsoft Azure Stack, as well as CI/HCI solutions from others.

My prediction near term, however, is that CI/HCI vendors will either ignore SBE, downplay it, create some new marketing on why it is not CI/HCI, or spread FUD about vendor lock-in. In other words, make some popcorn, sit back, and watch the show.

Backblaze – Low-cost, high-capacity cloud storage provider for backup and archiving, known for its quarterly disk drive reliability (or failure) reports. They have been around for a while and have a good reputation among those who use their services as a low-cost alternative to the larger providers.

Barefoot Networks – Some of you may already be aware of or following Barefoot Networks, while others outside the networking space may not have heard of them. They have some impressive capabilities and are relatively new, making them an excellent addition to this list.

Cloudian – Continuing to evolve and no longer just another object storage solution, Cloudian has been expanding via organic technology development as well as acquisitions, giving them a broad portfolio of software-defined storage and tiering from on-prem to the cloud with block, file, and object access.

Cloudflare – Not exactly a startup; some of you may know or are using Cloudflare, while for others its role as a web cache, DNS, and other services is transparent. I have been using Cloudflare on my various sites for over a year and, as a customer, like the security, DNS, cache, and analytics tools they provide.

Cobalt Iron – For some they might be new; software-defined data protection and management is the name of the game over at Cobalt Iron, which has been around a few years under the radar compared to more popular players. If you have or are involved with IBM Tivoli (aka TSM) based backup and data protection, among others, check out the exciting capabilities that Cobalt Iron can bring to the table.

CTERA – Having been around for a while, to some they might not be a startup; on the other hand, they may be new to others, while offering new data and file management options.

DataCore – You might know of DataCore for their software-defined storage and past storage hypervisor activity. However, they have a new piece of software, MaxParallel, that boosts server storage I/O performance. The software installs on your Windows Server instance (bare metal, VM, or cloud instance) and shows you performance with and without acceleration, which you can dynamically turn on and off.

DataDirect Networks (DDN) – Recently acquired Lustre assets from Intel, and is now picking up the pieces of storage startup Tintri after it ceased operations. What this means is that while beefing up their traditional high-performance compute (HPC) and super compute (SC) focus, DDN is also expanding into broader markets.

Dell Technologies – At its recent Dell Technologies World event in Las Vegas during late April and early May 2018, several announcements were made, including some tied to the emerging Gen-Z along with composability. More recently, Dell Technologies along with VMware announced business structure and finance changes. Changes include VMware declaring a dividend; Dell Technologies, being its largest shareholder, will use the proceeds to fund restructuring and debt service. Read more about VMware and Dell Technologies business and financial changes here.

Densify – With a name like Densify, it is no surprise they propose to drive densification and automation, with AI-powered deep learning to optimize application resource use across on-prem software-defined virtual as well as cloud instances and containers.

FlureDB – If you are into databases (SQL or NoSQL), as well as Blockchain or distributed ledgers, check out FlureDB.

Innovium – When it comes to data infrastructure and data center networking, Innovium is probably not on your radar; however, keep an eye on these folks and their TERALYNX switching silicon to see where it ends up, given their performance claims.

Komprise – File and data management solutions including tiering, along with partners such as IBM.

Kubernetes – A few years ago OpenStack, then Docker containers, was the favorite trending discussion topic, then Mesos, and along comes Kubernetes. It's safe to say, at least for now, that Kubernetes is settling in as the preferred open source industry and customer de facto choice (I want to say standard; however, I will hold off on that for now) for container and related orchestration management. Besides do-it-yourself (DiY) leveraging open source, there are also managed offerings including AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and VMware Pivotal Container Service (PKS), among others. Besides Azure, Microsoft also includes Kubernetes support (along with Docker and Windows containers) as part of Windows Server.
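As a rough illustration of what those managed offerings all orchestrate, the sketch below builds a minimal Kubernetes Deployment object as a plain Python dict. The app name and image are made up, and the dict mirrors (but is not validated against) the apps/v1 Deployment shape that EKS, AKS, GKE, and PKS consume:

```python
import json

# Hypothetical app name and image; the structure follows the Kubernetes
# apps/v1 Deployment shape, the declarative object an orchestrator reconciles.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},
    "spec": {
        "replicas": 3,  # the orchestrator keeps three pods running
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "example/demo:1.0"}
                ]
            },
        },
    },
}

# Serialize for submission via kubectl or an API client.
print(json.dumps(deployment, indent=2))
```

Whichever managed service (or DiY cluster) you pick, the declarative object is the portable part; that is a large part of why Kubernetes is winning the orchestration conversation.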

ManageEngine (part of Zoho) – Has data infrastructure monitoring technology called OpManager for keeping an eye on networking.

Marvell – Marvell may not be a familiar name (don't confuse it with the comics company); however, it has been a critical component supplier to partners whose server or storage technology you may be familiar with or have yourself. The server, storage, and I/O networking chip maker has closed on its acquisition of Cavium (who previously bought QLogic among others). The combined company is well positioned as a key data infrastructure component supplier to various partners, spanning servers, storage, and I/O networking including Fibre Channel (FC), Ethernet, InfiniBand, and NVMe (and NVMe-oF) among others.

Mellanox – Known for their InfiniBand adapters, switches, and associated software, along with a growing presence in RDMA over Converged Ethernet (RoCE), they are also well positioned for NVMe over Fabrics among other growth opportunities, following recent boardroom updates along with technology roadmaps.

Microsoft – The Azure public cloud continues to evolve similarly to AWS with more region locations and availability zone (AZ) data centers, as well as features and extensions. Microsoft also introduced, about a year ago, its hybrid on-prem CI/HCI cloud-in-a-box platform appliance Azure Stack (read about my test drive here). However, there is more to Microsoft than just their current cloud-first focus, which means Windows (desktop) as well as Server are also evolving. Currently in public preview, Windows Server 2019 insider builds are available to try out many new capabilities, some of which were covered in the recent free Microsoft virtual summit held in June. Key themes of Windows Server 2019 include security, performance, hybrid cloud, containers, software-defined storage, and much more.

Microsemi – Has been around for a while and is the combination of some vendors you may not have heard of, or heard about in some time, including PMC-Sierra (which acquired Adaptec) and Vitesse among others. The reason I have Microsemi on this list is a combination of their acquisitions, which might be an indicator of whom they pick up next. Another reason is that their components span data infrastructure topics from servers, storage, I/O and networking, to PCIe and many more.

NVIDIA – GPU high-performance compute and related compute offload technologies have been accessible for over a decade. More recently, with new graphics and computational demands, GPUs such as those from NVIDIA are in demand. Demand includes traditional graphics acceleration for physical and virtual environments, augmented and virtual reality, as well as cloud, along with compute-intensive analytics, AI, ML, and DL, and other cognitive workloads.

NGD Systems (NGD) – Similar to what NVIDIA and other GPU vendors do for enabling compute offload for specific applications and workloads, NGD is working on a variation. That variation is to move offload compute capabilities for server I/O storage-intensive workloads closer to, in fact into, storage system components such as SSDs and emerging SCMs and PMEMs. Unlike GPU-based applications or workloads that tend to be more memory and compute intensive, NGD is positioned for applications that are server I/O and storage intensive.

The premise of NGD is that they move the compute and application closer to where the data is, eliminating extra I/O as well as reducing the amount of main server memory and compute cycles consumed. If you are familiar with other server storage I/O offload engines and systems such as the Oracle Exadata database appliance, NGD is working at a tighter integration granularity. How it works is that your application gets ported to run on the NGD storage platform, which is SSD based and has a general-purpose processor. Your application is initiated from a host server and then runs on the NGD, meaning I/Os are kept local to the storage system. Keep in mind that the best I/O is the one that you do not have to do; the second best is the one with the least resource or user impact.
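The "move compute to the data" premise can be sketched with a toy comparison: filter records on the host (every record crosses the I/O path) versus filter them where they live and ship only the matches. Everything below is simulated; the record size and predicate are invented for illustration and do not reflect NGD's actual platform:

```python
RECORD_SIZE = 4096  # bytes per record; illustrative, not a real device figure

# Simulated dataset: 1 in 100 records matches the query predicate.
records = [{"id": i, "hot": (i % 100 == 0)} for i in range(10_000)]

def host_side_filter(records):
    """Traditional path: move every record across the I/O path, filter on host."""
    bytes_moved = len(records) * RECORD_SIZE
    matches = [r for r in records if r["hot"]]
    return matches, bytes_moved

def in_storage_filter(records):
    """Offload path: the predicate runs where the data lives; only matches move."""
    matches = [r for r in records if r["hot"]]
    bytes_moved = len(matches) * RECORD_SIZE
    return matches, bytes_moved

m1, moved1 = host_side_filter(records)
m2, moved2 = in_storage_filter(records)
assert m1 == m2             # same answer either way
print(moved1 // moved2)     # 100x less data crosses the I/O path in this toy case
```

The win scales with selectivity: the fewer records that match, the more I/O (and host memory and compute) the offload path avoids, which is the "best I/O is the one you do not have to do" point in code form.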

Opvisor – Performance activity and capacity monitoring tools including for VMware environments.

Pavilion – Startup with an interesting NVMe based hardware appliance.

Quest – Having regained their independence as a free-standing company since divestiture from Dell Technologies (Dell had previously acquired Quest before the EMC acquisition), Quest continues to make their data infrastructure related management tools available. Besides now being a standalone company again, keep an eye on Quest to see how they evolve their existing data protection and data infrastructure resource management tools portfolio via growth or acquisition, or perhaps Quest will be on somebody else's future growth list.

Retrospect – Far from being a startup; after regaining their independence from EMC, who bought them several years ago, they have continued to enhance their data protection technology. Disclosure: I have been a Retrospect customer since 2001, using it for on-site as well as cloud data protection backups.

Rubrik – Becoming more of a data infrastructure household name given their expanding technology portfolio and marketing efforts. More commonly known in smaller customer environments, as well as broadly within industry insider circles, Rubrik has the potential, with continued technology evolution, to move further upmarket similar to how Commvault did back in the late 90s, just saying.

SkyScale – Cloud service provider that offers dedicated bare metal as well as private and hybrid cloud instances, along with GPUs to support AI, ML, DL, and other high-performance compute workloads.

Snowflake – The name does not describe well what they do or who they are. However, they have interesting cloud data warehouse (old school) and large-scale data lake (new school) technologies.

Strongbox – Not to be confused with technology such as that from ioSafe (e.g., waterproof, fireproof), Strongbox is a data protection storage solution for storing archives, backups, and BC/BR/DR data, as well as for cloud tiering. For those who are into buzzword bingo, think cloud tiering, object, and cold storage among others. The technology evolved out of Crossroads and, with David Cerf at the helm, has branched out into a private company worth keeping an eye on.

Storbyte – With longtime industry insider sales and marketing pro Diamond Lauffin (formerly of Nexsan) involved as Chief Evangelist, this is worth keeping an eye on and could be entertaining as well as exciting. In some ways it could be seen as a bit of a Nexsan meets NVMe meets NAND flash meets cost-effective value storage deja vu play.

Talon – Enterprise storage and management solutions for file sharing across organizations, ROBO and cloud environments.

Ubiquiti – Also known as UBNT, a data infrastructure networking vendor whose technologies span WiFi access points (APs), high-performance antennas, routing, switching, and related hardware, along with software solutions. UBNT is not as well-known in larger environments as Cisco or others. However, they are making a name for themselves moving from the edge to the core. That is, working from the edge with APs, routers, firewalls, and gateways for SMB, ROBO, and SOHO as well as consumer environments (I have several of their APs, switches, routers, and high-performance antennas along with the management software), these technologies are also finding their way into larger environments.

My first use of UBNT was several years ago when I needed to get an IP network connection to a remote building separated by several hundred yards of forest. The solution I found was to get a pair of UBNT Nano APs and put them in secure bridge mode; now I have a high-performance WiFi service through a forest of trees. Since then I have replaced an older Cisco router, several Cisco and other APs, and done a phased migration of switches.

UpdraftPlus – If you have a WordPress web or blog site, you should also have the UpdraftPlus plugin (go premium, btw) for data protection. I have been using UpdraftPlus for several years on my various sites to back up and protect the MySQL databases and all other content. For those of you who are familiar with Spanning (e.g., acquired by EMC, then divested by Dell) and what they do for cloud applications, UpdraftPlus does similar for lower-end, smaller cloud-based applications.

Vexata – Startup with a scale-out NVMe storage solution.

VMware – Expanding their cloud foundation from on-prem to in and on clouds including AWS among others. Their data infrastructure focus continues to expand from core to edge across server, storage, I/O, and networking. With the recent Dell Technologies and VMware dividend news, it should be interesting to see what lies ahead for both entities.

What About Those Not Mentioned?

By the way, if you were wondering why others are not in the above list, simple: check out last year's list, which includes Apcera, Blue Medora, Broadcom, Chelsio, Commvault, Compuverde, Datadog, Datrium, Docker, E8 Storage, Elastifile, Enmotus, Everspin, Excelero, Hedvig, Huawei, Intel, Kubernetes, Liqid, Maxta, Micron, Minio, NetApp, Neuvector, Noobaa, NVIDIA, Pivot3, Pluribus Networks, Portworx, Rozo Systems, ScaleMP, Storpool, Stratoscale, SUSE, Tidalscale, Turbonomic, Ubuntu, Veeam, Virtuozzo and WekaIO. Note that many of the above have expanded their capabilities in the past year and remain, or have become, even more interesting to watch, while some might be on a future "where are they now" list sometime down the road. View additional vendors and service providers via our industry links and resources page here.

What About New, Emerging, Trending and Trendy Technologies?

Bitcoin and blockchain storage startups, some of which claim or would like to replace cloud storage, taking on giants such as AWS S3 in the not so distant future, have been popping up lately. Some of these have good and exciting stories, if they can deliver on the hype along with the premise. A few names to drop include, among others, Filecoin, Maidsafe, Sia, and Storj, along with services from AWS, Azure, Google, and a long list of others.

Besides blockchain distributed ledgers, other technologies and trends to keep an eye on include compute processors from ARM to SoC, GPU, FPGA, and ASIC for offload and specialized processing. GPUs, ASICs, and FPGAs are appearing in new deployments across cloud providers as they look to offload processing from their general-purpose servers to derive total effective productivity out of them. In other words, innovating by offloading to boost their effective return on investment (the old ROI), as well as increase their return on innovation (the new ROI).

Other data infrastructure server I/O trends to watch, which also tie into storage and networking, include Gen-Z, which some may claim as the successor to PCIe, Ethernet, and InfiniBand among others (hint: get ready for a new round of "something is dead" hype). Near term, the objective of Gen-Z is to coexist with and complement PCIe, Ethernet, and CPU-to-memory interconnects, while enabling more granular allocation of data infrastructure resources (e.g., composability). Besides watching who is part of the Gen-Z movement, keep an eye on who is not part of it yet, specifically Intel.

NVMe and its many variations, from server internal to networked NVMe over Fabrics (NVMe-oF) and its derivatives, continue to gain both industry adoption and customer deployment. There are some early NVMe-oF based server storage deployments (along with marketing dollars). However, server-side NVMe customer adoption is where the dollars are moving to the vendors. In other words, it's still early in the bigger, broader NVMe and NVMe-oF game.

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Let's see how those mentioned last year as well as this year, along with some new and emerging vendors and service providers who did not get mentioned, end up next year, as well as in the years after that.

Keep in mind that there is a difference between industry adoption and customer deployment, granted they are related. Likewise, let's see who will be at the top in three, five, and ten years, which means some of the current top or favorite vendors may or may not be on the list, same as with some of the established vendors. Meanwhile, check out the 2018 Hot Popular New Trending Data Infrastructure Vendors to Watch.


June 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 6 (June 2018)

Hello and welcome to the June 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the May 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

In this issue, buzzword topics include AI, All Flash, HPC, Lustre, Multi Cloud, NVMe, NVMe-oF, SAS, and SSD among others:

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

June data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection industry activity includes among others:

Check out what's new at Amazon Web Services (AWS) here, as well as Microsoft Azure here, Google Cloud here, IBM Softlayer here, and OVH here.

CTERA announced new cloud storage gateways (HC Series) for enterprise environments that include all flash SSD options, capacity up to 96TB (raw), petabyte-scale tiering to public and private clouds, 10 GbE Ethernet connectivity, and virtual machine deployment, along with high availability configurations.

Cray announced enhancements to its Lustre (parallel file system) based ClusterStor storage system for high-performance compute (HPC), which it previously acquired from Seagate (who had acquired it as part of the Xyratex acquisition). New enhancements for ClusterStor include an all flash SSD solution that will integrate and work with existing hard disk drive (HDD) based systems.

In related Lustre activity, DataDirect Networks (DDN) has acquired the Lustre file system business from Intel. Intel acquired its Lustre capabilities via its purchase of Whamcloud back in 2012, and in 2017 announced that it was getting out of the Lustre business (here and here). DDN also announced new storage solutions for enabling HPC environment workloads along with artificial intelligence (AI) centric applications.

HPE, which held its Discover event, announced a $4 billion USD investment over four years for the development of edge technologies and services, including software-defined WAN (SD-WAN) and security among others.

Microsoft held its first virtual Windows Server Summit in June that outlined current and future plans for the operating system along with its hybrid cloud future.

Penguin Computing has announced the Accelion solution for accessing geographically dispersed data, enabling faster file transfer and other data movement functions.

SwiftStack has added multi-cloud features (enhanced search, universal access, policy management, automation, data migration), making them available via the 1space open source project. 1space enables a single object namespace across different object storage locations, including integration with OpenStack Swift.
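Conceptually, a single namespace across object stores is a thin lookup facade in front of multiple backing stores. The sketch below uses invented store contents and is nothing like the real 1space implementation (which adds policy, migration, and OpenStack Swift integration); it only shows the first-hit-wins idea:

```python
class UnifiedNamespace:
    """Resolve an object key across several backing stores; first hit wins."""

    def __init__(self, *stores):
        # e.g., an on-prem object store consulted first, a cloud store second
        self.stores = stores

    def get(self, key):
        for store in self.stores:
            if key in store:
                return store[key]
        raise KeyError(key)

# Toy backing stores standing in for object storage endpoints.
on_prem = {"reports/2018.csv": b"local bytes"}
cloud = {"archive/2017.csv": b"cloud bytes"}

ns = UnifiedNamespace(on_prem, cloud)
print(ns.get("reports/2018.csv"))   # served from the on-prem store
print(ns.get("archive/2017.csv"))   # transparently served from the cloud store
```

The value of this pattern is that applications address one namespace while data can live in, or migrate between, different locations behind it.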

Vexata announced a new version of its Vexata operating system (VX-OS) for its storage solution including NVMe over Fabric (NVMe-oF) support.

Speaking of NVMe and fabrics, the Fibre Channel Industry Association (FCIA) announced that the International Committee for Information Technology Standards (INCITS) has published the T11 technical committee developed Fibre Channel over NVMe (FC-NVMe) standard.

NVMe frontend NVMeoF
Various NVMe front-end including NVMeoF along with NVMe back-end devices (U.2, M.2, AiC)

Keep in mind that there are many different facets to NVMe, including direct attached (M.2, U.2/8639, PCIe AiC) along with fabrics. Likewise, there are various fabric options for the NVMe protocol, including over Fibre Channel (FC-NVMe), along with other NVMe over Fabrics options including RDMA over Converged Ethernet (RoCE) as well as IP based among others. NVMe can be used as a front-end on storage systems supporting server attachment (e.g., competing with Fibre Channel, iSCSI, and SAS among others).

Another variation of NVMe is as a back-end for attachment of drives or other NVMe based devices in storage systems, as well as servers. There are also end-to-end NVMe (e.g., both front-end and back-end) options. Keep context in mind when you hear or talk about NVMe, and in particular NVMe over Fabrics; learn more about NVMe at https://storageioblog.com/nvme-place-volatile-memory-express/.

Toshiba announced a new RM5 series of high-capacity SAS SSDs as replacements for SATA devices in servers. The RM5 series being added to the Toshiba portfolio combines the capacity and economics traditionally associated with SATA SSDs along with the performance as well as connectivity of SAS.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments The storage administrator skills you need to keep up today
Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mindset

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

Announcing Windows Server Summit Virtual Online Event
May 2018 Server StorageIO Data Infrastructure Update Newsletter
Solving Application Server Storage I/O Performance Bottlenecks Webinar
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Which Enterprise HDD for Content Server Platform
Server Storage I/O Benchmark Performance Resource Tools
Introducing Windows Subsystem for Linux WSL Overview
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The list spans various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is also a good resource to check out. Speaking of my books, Didier Van Hoye (@WorkingHardInIt) has a good review over on his site you can view here; also check out the rest of his great content while there.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

July 25, 2018 – Webinar – Data Protect & Storage

June 27, 2018 – Webinar – App Server Performance

June 26, 2018 – Webinar – Cloud App Optimize

May 29, 2018 – Webinar – Microsoft Windows as a Service

April 24, 2018 – Webinar – AWS and on-site, on-premises hybrid data protection

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommended Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

What this all means and wrap-up

Data Infrastructures are what exist inside physical data centers as well as spanning cloud, converged, hyper-converged, virtual, serverless and other software-defined as well as legacy environments. NVMe continues to gain industry adoption as well as customer deployment. Cloud adoption also continues, along with multi-cloud deployments. Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter and watch for more NVMe, cloud and data protection among other topics in future posts, articles, events, and newsletters.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dell Technology World 2018 Announcement Summary

This is part one of a five-part series about the Dell Technology World 2018 announcement summary. Last week (April 30-May 3) I traveled to Las Vegas Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure btw). There were several announcements along with plenty of other activity from sessions, meetings, hallway and event networking taking place at Dell Technology World DTW 2018.

Major data infrastructure technology announcements include:

  • PowerMax all-flash array (AFA) solid state device (SSD) NVMe storage system
  • PowerEdge four-socket 2U and 4U rack servers
  • XtremIO X2 AFA SSD storage system updates
  • PowerEdge MX preview of future composable servers
  • Desktop and thin client along with other VDI updates
  • Cloud and networking enhancements

Besides the above, additional data infrastructure related announcements were made in association with Dell Technology family members including VMware along with other partners, as well as customer awards. Other updates and announcements were tied to business updates from Dell Technology, Dell Technologies Capital (venture capital) and Dell Financial Services.

Dell Technology World Buzzword Bingo Lineup

Some of the buzzword bingo terms, topics, acronyms from Dell Technology World 2018 included AFA, AI, Autonomous, Azure, Bare Metal, Big Data, Blockchain, CI, Cloud, Composable, Compression, Containers, Core, Data Analytics, Dedupe, Dell, DFS (Dell Financial Services), DFR (Data Footprint Reduction), Distributed Ledger, DL, Durability, Fabric, FPGA, GDPR, Gen-Z, GPU, HCI, HDD, HPC, Hybrid, IOP, Kubernetes, Latency, MaaS (Metal as a Service), ML, NFV, NSX, NVMe, NVMeoF, PACE (Performance Availability Capacity Economics), PCIe, Pivotal, PMEM, RAID, RPO, RTO, SAS, SATA, SC, SCM, SDDC, SDS, Socket, SSD, Stamp, TBW (Terabytes Written per day), VDI, venture capital, VMware and VR among others.

Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue

Dell Technology World 2018 was located at the combined Palazzo and Venetian hotels along with adjacent Sands Expo center kicking off Monday, April 30th and wrapping up May 4th.

The theme for Dell Technology World DTW 2018 was make it real, which in some ways was interesting given the focus on virtual including virtual reality (VR), software-defined data center (SDDC) virtualization, data infrastructure topics, along with artificial intelligence (AI).

Virtual Sky Dell Technology World 2018
Make it real – Venetian Palazzo St. Mark’s Square on the way to Sands Expo Center

There was plenty of AI, VR, SDDC along with other technologies, tools as well as some fun stuff to do including VR games.

Dell Technology World 2018 Commons Area
Dell Technology World Village Area near Key Note and Expo Halls

Dell Technology World 2018 Commons Area Drones
Dell Technology World Drone Flying Area

During a break from some meetings, I used a few minutes to fly a drone using VR, which was interesting. I have been operating drones (see some videos here) visually for several years, without dependence on first-person view (FPV) or relying on extensive autonomous operations, instead flying heads-up by hand. Needless to say, the VR was interesting, granted I encountered a bit of vertigo that I had to get used to.

Dell Technology World 2018 Commons Area Virtual Village
More views of the Dell Technology World Village and Commons Area with VR activity

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Village and VR area

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Bean Bag Area

Dell Technology World 2018 Announcement Summary

Ok, nuff with the AI, ML, DL, VR fun, time to move on to the business and technology topics of Dell Technologies World 2018.

What was announced at Dell Technology World 2018 included among others:

Dell Technology World 2018 PowerMax
Dell PowerMax Front View

Subsequent posts in this series take a deeper look at the various announcements as well as what they mean.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

On the surface it may appear that there was not much announced at Dell Technology World 2018, particularly compared to some of the recent Dell EMC Worlds and EMC Worlds. However, it turns out that there was a lot announced, granted without some of the entertainment and circus-like atmosphere of previous events. Continue reading Part II Dell Technology World 2018 Modern Data Center Announcement Details in this series here, along with Part III here, Part IV here (including PowerEdge MX composable infrastructure leveraging Gen-Z) and Part V (servers and converged) here.

Ok, nuff said, for now.

Cheers Gs


Part III Dell Technology World 2018 Storage Announcement Details

This is Part III Dell Technology World 2018 Storage Announcement Details that is part of a five-post series (view part I here, part II here, part IV (PowerEdge MX Composable) here and part V here). Last week (April 30-May 3) I traveled to Las Vegas Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure btw).

Dell Technology World 2018 Storage Announcements Include:

  • PowerMax – Enterprise class tier 0 and tier 1 all-flash array (AFA)
  • XtremIO X2 – Native replication and new entry-level pricing

Dell Technology World 2018 PowerMax back view
Back view of Dell PowerMax

Dell PowerMax Something Old, Something New, Something Fast Near You Soon

PowerMax is the new companion to VMAX. Positioned for traditional tier 0 and tier 1 enterprise-class applications and workloads, PowerMax is optimized for dense server virtualization and SDDC, SAP, Oracle, SQL Server along with other low-latency, high-performance database activity. Different target workloads include Mainframe as well as Open Systems, AI, ML, DL, Big Data, as well as consolidation.

The Dell PowerMax is an all-flash array (AFA) architecture with end-to-end NVMe along with built-in AI and ML technology. It builds on the architecture of the Dell EMC VMAX (some models are still available) with newer, faster processors and is fully end-to-end NVMe ready (e.g., front-end server attachment, back-end devices).

The AI and ML features of the PowerMax PowerMaxOS include an engine (software) that learns and makes autonomous storage management decisions, including tiering. Other AI- and ML-enabled operations include performance optimizations based on I/O pattern recognition.

Other features of PowerMax besides increased speeds, feeds and performance include data footprint reduction (DFR) with inline deduplication along with enhanced compression. The DFR benefits include up to 5:1 data reduction for space efficiency, without performance impact, to boost performance effectiveness. The DFR, along with improved 2x rack density and up to 40% power savings (your results may vary; based on Dell claims), enables an impressive amount of performance, availability, capacity and economics (e.g., PACE) in a given number of cubic feet (or meters).
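To put the ratio in concrete terms, here is a minimal sketch of the arithmetic; the 5:1 figure is Dell's claim, and the raw capacity below is a hypothetical number chosen purely for illustration:

```python
# Illustrative arithmetic only: how a data footprint reduction (DFR)
# ratio translates into effective (logical) capacity. Actual reduction
# varies by workload and data type.

def effective_capacity(raw_tb: float, dfr_ratio: float) -> float:
    """Effective capacity in TB given raw capacity and a DFR ratio (e.g., 5.0 for 5:1)."""
    return raw_tb * dfr_ratio

raw = 100.0  # hypothetical raw usable flash capacity in TB
print(effective_capacity(raw, 5.0))  # 5:1 claim -> 500.0 TB effective
print(effective_capacity(raw, 2.5))  # more conservative 2.5:1 -> 250.0 TB
```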

There are two PowerMax models including 2000 (scales from 1 to 2 redundant controllers) and 8000 (scales from 1 to 8 redundant controller nodes). Note that controller nodes are Intel Xeon multi-socket, multi-core processors enabling scale-up and scale-out performance, availability, and capacity. Competitors of the PowerMax include AFA solutions from HPE 3PAR, NetApp, and Pure Storage among others.

Dell Technology World 2018 PowerMax Front View
Front view of Dell PowerMax

Besides resiliency, data services and data protection, Dell claims PowerMax is 2x faster than its nearest high-end storage system competitors with up to 150GB/sec (e.g., 1,200Gbps) of bandwidth, as well as up to 10 million IOPS with 50% lower latency compared to the previous VMAX.
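As a quick sanity check on the units, the quoted 150GB/sec figure converts to the cited 1,200Gbps at 8 bits per byte (decimal units):

```python
# Sanity check of the quoted bandwidth units: gigabytes per second to
# gigabits per second is a factor of 8 bits per byte.

def gbytes_to_gbits(gb_per_sec: float) -> float:
    """Convert GB/sec (decimal) to Gbps."""
    return gb_per_sec * 8.0

print(gbytes_to_gbits(150.0))  # 150 GB/sec -> 1200.0 Gbps, matching the cited figure
```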

PowerMax is also fully end-to-end NVMe ready (both back-end and front-end). Back-end includes NVMe drives, devices, shelves and enclosures, while front-end includes future NVMe over Fabrics (e.g., NVMeoF). Being NVMeoF ready enables PowerMax to support future front-end server network connectivity options in addition to traditional SAN Fibre Channel (FC) and iSCSI among others.

PowerMax is also ready for new, emerging high-speed, low-latency storage class memory (SCM). SCM is the next generation of persistent memory (PMEM), having performance closer to traditional DRAM while providing the persistence of flash SSD. Examples of SCM technologies entering the market include Intel Optane based on 3D XPoint, along with others such as those from Everspin.
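For context, here is a rough order-of-magnitude latency comparison (approximate public figures, not from this announcement; actual numbers vary widely by device and generation) showing where SCM sits between DRAM and NAND flash:

```python
# Approximate access latencies in microseconds, for orientation only:
# SCM lands between DRAM (volatile, fastest) and NAND flash SSDs
# (persistent, slower), which is what makes it interesting as a tier.

latency_us = {
    "DRAM": 0.1,                 # ~100 nanoseconds
    "SCM (e.g., Optane)": 10.0,  # ~microseconds to tens of microseconds
    "NAND flash SSD": 100.0,     # ~tens to hundreds of microseconds
}

for tier, us in sorted(latency_us.items(), key=lambda kv: kv[1]):
    print(f"{tier}: ~{us} us")
```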

IBM Z Zed Mainframe at Dell Technology World 2018
An IBM “Zed” Mainframe (in case you have never seen one)

Based on the performance claims, the Dell PowerMax has an interesting, if not potentially industry-leading, power, performance, availability, capacity and economic footprint per cubic foot (or meter). It will be interesting to see some third-party validation or audits of Dell's claims. Likewise, I look forward to seeing some real-world applied workloads of Dell PowerMax vs. other storage systems. Here are some additional perspectives via SearchStorage: Dell EMC all-flash PowerMax replaces VMAX, injects NVMe


Dell PowerMax Visual Studio (Image via Dell.com)

To help with customer decision making, Dell has created an interactive VMAX and PowerMax configuration studio that you can use to try out as well as learn about different options here. View more Dell PowerMax speeds, feeds, slots, watts, features and functions here (PDF).

Dell Technology World 2018 XtremIO X2

XtremIO X2

Dell XtremIO X2 and its XIOS 6.1 operating system (software-defined storage) have been enhanced with native replication across wide area networks (WAN). The new WAN replication is metadata-aware and native to the XtremIO X2, implementing data footprint reduction (DFR) technology that reduces the amount of data sent over network connections. The benefit is more data moved in a given amount of time, along with better data protection requiring less time (and network) by only moving unique changed data.

Dell Technology World 2018 XtremIO X2 back view
Back View of XtremIO X2

Dell EMC claims to reduce WAN network bandwidth by up to 75% utilizing the new native XtremIO X2 asynchronous replication. Also, Dell says XtremIO X2 requires up to 38% less storage space at disaster recovery and business resiliency locations while maintaining predictable recovery point objectives (RPO) of 30 seconds. Another XtremIO X2 announcement is a new entry model for customers at up to 55% lower cost than previous product generations. View more information about Dell XtremIO X2 here, along with speeds and feeds here, here, as well as here.
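To illustrate what a claimed 75% reduction can mean in practice, here is a sketch with hypothetical numbers (the change rate and link speed below are made up for the example; the 75% figure is Dell's claim):

```python
# Illustrative only: replication transfer time over a fixed-speed WAN
# link, with and without a data reduction fraction applied. Sending
# 75% less data means roughly 4x shorter transfer windows.

def transfer_hours(data_gb: float, link_mbps: float, reduction: float = 0.0) -> float:
    """Hours to send data_gb (decimal GB) over link_mbps after a reduction fraction."""
    sent_bits = data_gb * (1.0 - reduction) * 8e9  # GB -> bits
    return sent_bits / (link_mbps * 1e6) / 3600.0

changed = 500.0  # GB of changed data per replication cycle (hypothetical)
link = 100.0     # Mbps WAN link (hypothetical)
print(round(transfer_hours(changed, link), 2))        # no reduction -> 11.11 hours
print(round(transfer_hours(changed, link, 0.75), 2))  # 75% reduction -> 2.78 hours
```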

What about Dell Midrange Storage Unity and SC?

Here are some perspectives Via SearchStorage: Dell EMC midrange storage keeps its overlapping arrays.

Dell Bulk and Elastic Cloud Storage (ECS)

One of the questions I had going into Dell Technology World 2018 was the status of ECS (and its predecessors Atmos as well as Centera) bulk object storage, given the lack of messaging and news around it. Specifically, my concern was that if ECS is the platform for storing and managing data to be preserved for the future, what is its current status and state, as well as its future.

In conversations with the Dell ECS folks, I learned that ECS, which has absorbed Centera functionality, is very much alive; stay tuned for more updates. Also note that while Centera has reached end of life (EOL), its feature functionality has been absorbed by ECS, meaning data preserved there can now be managed by ECS. While I cannot divulge the details of some meeting discussions, I can say that I am comfortable (for now) with the future directions of ECS along with the data it manages; stay tuned for updates.

Dell Data Protection

What about data protection? Security was mentioned in several different contexts during Dell Technology World 2018, and a strong physical security presence was visible at the Palazzo and Sands venues. Likewise, there was a data protection presence at Dell Technologies World 2018 in the expo hall, as well as in various sessions.

What was heard was mainly around data protection management tools, hybrid, as well as data protection appliances and Data Domain-based solutions. Perhaps we will hear more from Dell Technologies World in the future about data protection related topics.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

If there was any doubt about whether Dell would keep EMC storage progressing forward, the above announcements show some examples of what they are doing. On the other hand, let's stay tuned to see what news and updates appear in the future pertaining to midrange storage (e.g., Unity and SC) as well as Isilon, ScaleIO, data protection platforms and software, among other technologies.

Continue reading part IV (PowerEdge MX Composable and Gen-Z) here in this series, as well as part I here, part II here, and part V here.

Ok, nuff said, for now.

Cheers Gs


Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure

This is Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure that is part of a five-post series (view part I here, part II here, part III here and part V here). Last week (April 30-May 3) I traveled to Las Vegas Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure btw).

Introducing PowerEdge MX Composable Infrastructure (the other CI)

Dell announced at Dell Technology World 2018 a preview of the new PowerEdge MX (kinetic) family of data infrastructure resource servers. PowerEdge MX is being developed to meet the needs of resource-centric data infrastructures that require scalability, as well as performance, availability, capacity and economic (PACE) flexibility for diverse workloads. Read more about Dell PowerEdge MX, Gen-Z and composable infrastructures (the other CI) here.

Some of the workloads being targeted by PowerEdge MX include large-scale dense SDDC virtualization (and containers), private (or public clouds by service providers). Other workloads include AI, ML, DL, data analytics, HPC, SC, big data, in-memory database, software-defined storage (SDS), software-defined networking (SDN), network function virtualization (NFV) among others.

The new PowerEdge MX that was previewed will be formally announced later in 2018, featuring a flexible, decomposable as well as composable architecture that enables resources to be disaggregated and reassigned or aggregated to meet particular needs (e.g., defined or composed). Instead of traditional software-defined virtualization carving servers into smaller virtual machines or containers to meet workload needs, PowerEdge MX is part of a next-generation approach that enables server resources to be leveraged at a finer granularity.

For example, today an entire server, including all of its sockets, cores, memory and PCIe devices among other resources, gets allocated and defined for use. A server gets defined for use by an operating system when running bare metal (or Metal as a Service), or by a hypervisor. PowerEdge MX (and other platforms expected to enter the market) offer a finer granularity where, with proper upper-layer (or higher-altitude) software, resources can be allocated and defined to meet different needs.

What this means is the potential to allocate resources to a given server with more granularity and flexibility, as well as to combine multiple servers' resources to create what appears to be a larger server. There are vendors in the market who have been working on and enabling this type of approach for several years, ranging from ScaleMP to startups Liqid and Tidal among others. However, at the heart of the Dell PowerEdge MX is the new emerging Gen-Z technology.
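The composable idea above can be sketched in a few lines. This is my toy illustration of disaggregated resource pools, not Dell's implementation or any real composability API: resources live in a shared pool and a "server" is just a definition carved from it at finer granularity than a whole box.

```python
# Toy model of composable infrastructure: a disaggregated pool of
# cores, memory and NVMe capacity from which server definitions are
# composed, and to which they could later be returned (decomposed).

from dataclasses import dataclass

@dataclass
class ComposedServer:
    cores: int
    mem_gb: int
    nvme_tb: int

@dataclass
class ResourcePool:
    cores: int
    mem_gb: int
    nvme_tb: int

    def compose(self, cores: int, mem_gb: int, nvme_tb: int) -> ComposedServer:
        """Carve a server definition out of the pool."""
        assert cores <= self.cores and mem_gb <= self.mem_gb and nvme_tb <= self.nvme_tb
        self.cores -= cores
        self.mem_gb -= mem_gb
        self.nvme_tb -= nvme_tb
        return ComposedServer(cores, mem_gb, nvme_tb)

pool = ResourcePool(cores=128, mem_gb=2048, nvme_tb=100)
db = pool.compose(32, 512, 20)  # compose a database node
web = pool.compose(8, 64, 2)    # compose a small web node
print(pool.cores)               # cores remaining in the pool -> 88
```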

If you are not familiar with Gen-Z, add it to your buzzword bingo lineup and learn about it, as it is coming your way. A brief overview of the Gen-Z consortium along with Gen-Z primer material can be found here. A common question is whether Gen-Z is a replacement for PCIe; the answer for now is that they will coexist and complement each other. Another common question is whether Gen-Z will replace Ethernet and InfiniBand; the answer for now is that they complement each other. Another question is whether Gen-Z will replace Intel QuickPath and other CPU, device and memory interconnects; the answer is potentially, and in my opinion, watch to see how long Intel drags its feet.

Note that composability is another way of saying defined without saying defined, something to pay attention to as well as have some vendor fun with. Also note that Dell refers to PowerEdge MX as a Kinetic architecture, which is not the same as the Seagate Kinetic Ethernet-based object key-value drive initiative from a few years ago (learn more about Seagate Kinetic here). Learn more about Gen-Z and what Dell is doing here.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Dell has provided a glimpse of what they are working on pertaining to composable infrastructure (the other CI), as well as Gen-Z and the related next generation of servers with PowerEdge MX and Kinetic. Stay tuned for more about Gen-Z and composable infrastructures. Continue reading Part V (servers and converged) in this series here, as well as part I here, part II here and part III here.

Ok, nuff said, for now.

Cheers Gs


VMware vSphere vSAN vCenter version 6.7 SDDC Update Summary

VMware last week announced vSphere, vSAN and vCenter version 6.7, among other updates for their software-defined data center (SDDC) and software-defined infrastructure (SDI) solutions. The new April v6.7 announcements followed those from this past March, when VMware announced cloud enhancements with partner AWS (more on that announcement here).

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

For those looking for a more extended version with a closer look and analysis of what VMware announced, click here for part two and here for part three.

What VMware announced is generally available (GA), meaning you can now download from here the bits (e.g., software) that include:

  • ESXi aka vSphere 6.7 hypervisor build 8169922
  • vCenter Server 6.7 build 8217866
  • vCenter Server Appliance 6.7 build 8217866
  • vSAN 6.7 and other related SDDC management tools
  • vSphere Operations Management (vROps) 6.7
  • Increased speeds, feeds and other configuration maximum limits

For those not sure or needing a refresher, vCenter Server is the software that provides extended management across multiple vSphere ESXi hypervisors; it runs on a Windows platform (or as a virtual appliance).

Major themes of the VMware April announcement are increased scalability along with performance enhancements, ease of use, security, as well as extended application support. As part of the v6.7 improvements, VMware is focusing on simplifying, as well as accelerating, software-defined data infrastructure along with other SDDC lifecycle operation activities.

Extended application support includes traditional demanding enterprise IT, along with High-Performance Compute (HPC), Big Data, Little Data, Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL), as well as other emerging workloads. Part of supporting demanding workloads includes enhanced support for Graphical Processing Units (GPU) such as those from Nvidia among others.

What Happened to vSphere 6.6?

A question that comes up is that there is a vSphere 6.5 (and its smaller point releases) and now vSphere 6.7 (along with vCenter and vSAN among others). What happened to vSphere 6.6? Good question, and I am not sure what the real or virtual answer from VMware is or would be. My take is that this is a good opportunity for VMware to align the versions of principal components (e.g., vSphere/ESXi, vCenter, vSAN) to a standard or unified numbering scheme.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Overall the VMware vSphere vSAN vCenter version 6.7 enhancements are a good evolution of their core technologies for enabling hybrid, converged software-defined data infrastructures and software-defined data centers. Continue reading more about VMware vSphere vSAN vCenter version 6.7 SDDC Update Summary here in part II (focus on management, vCenter plus security) and part III here (focus on server storage I/O and deployment) of this three-part series.

Ok, nuff said, for now.

Cheers Gs


VMware vSphere vSAN vCenter v6.7 SDDC details

This VMware vSphere vSAN vCenter v6.7 SDDC details of announcement summary focuses on vCenter, security, and management. This is part two (part one here) of a three-part (part III here) series looking at the VMware vSphere vSAN vCenter v6.7 SDDC announcement details.

Last week VMware announced vSphere vSAN vCenter v6.7 updates as part of enhancing their software-defined data center (SDDC) and software-defined infrastructure (SDI) solutions core components. This is an expanded post as a companion to the Server StorageIO summary piece here. These April updates followed those from this past March when VMware announced cloud enhancements with partner AWS (more on that announcement here).

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

What VMware announced is generally available (GA), meaning you can now download from here the bits (e.g., software) that include:

  • ESXi aka vSphere 6.7 hypervisor build 8169922
  • vCenter Server 6.7 build 8217866
  • vCenter Server Appliance 6.7 build 8217866
  • vSAN 6.7 and other related SDDC management tools
  • vSphere Operations Management (vROps) 6.7

For those not sure or needing a refresher, vCenter Server is the software that provides extended management across multiple vSphere ESXi hypervisors; it runs on a Windows platform (or as a virtual appliance).

Major themes of the VMware April announcements are focused around:

  • Increased enterprise and hybrid cloud scalability
  • Resiliency, availability, durability and security
  • Performance, efficiency and elasticity
  • Intuitive, simplified management at scale
  • Expanded support for demanding application workloads

Expanded application support includes traditional demanding enterprise IT, along with High-Performance Compute (HPC), Big Data, Little Data, Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL), as well as other emerging workloads. Part of supporting demanding workloads includes enhanced support for Graphical Processing Units (GPU) such as those from Nvidia among others.

What was announced

As mentioned above and in other posts in this series, VMware announced new versions of their ESXi hypervisor vSphere v6.7, as well as virtual SAN (vSAN) v6.7 and virtual Center (vCenter) v6.7, among other related tools. One of the themes of this announcement by VMware is hybrid SDDC spanning on-site, on-premises (or on-premise if you prefer) to the public cloud. Other topics involve increasing scalability along with stability, as well as ease of management along with security and performance updates.

As part of the v6.7 enhancements, VMware is focusing on simplifying, as well as accelerating software-defined data infrastructure along with other SDDC lifecycle operation activities. Additional themes and features focus on server, storage, I/O resource enablement, as well as application extensibility support.

vSphere ESXi hypervisor

With v6.7, ESXi host maintenance times are improved with a single reboot vs. the previous multiple reboots for some upgrades, as well as quick boot. Quick boot enables restarting the ESXi hypervisor without rebooting the physical machine, skipping time-consuming hardware initialization.

The HTML5-based vSphere client GUI (along with API and CLI) has been enhanced with increased feature function parity compared to predecessor versions and other VMware tools. Increased functionality includes NSX, vSAN and VMware Update Manager (VUM) capabilities among others. In other words, not only are new technologies supported, but functions for which you may have resisted using the web-based interfaces in the past due to limited extensibility are being addressed with this release.

vCenter Server and vCenter Server Appliance (VCSA)

VMware has announced that moving forward the hosted (e.g., running on a Windows server platform) version of vCenter is being deprecated. What this means is that it is time for those not already doing so to migrate to the vCenter Server Appliance (VCSA). As a refresher, VCSA is a turnkey software-defined virtual appliance that includes vCenter Server software running on the VMware Photon Linux operating system as a virtual machine.

As part of the update, the enhanced vCenter Server Appliance (VCSA) supports new efficient, effective API management along with multiple vCenters as well as performance improvements. VMware cites 2x faster vCenter operations per second, a 3x reduction in memory usage, along with 3x quicker Distributed Resource Scheduler (DRS) related activities across powered-on VMs.

What this means is that VCSA is a self-contained virtual appliance that can be configured for small through very large environments in various configurations. With the v6.7 VCSA emphasis on scaling and performance, along with security and ease of use features, VCSA is better positioned to support large enterprise deployments along with hybrid cloud. VCSA v6.7 is more than just a UI enhancement; the v6.5 UI is shown below, followed by an image of the v6.7 UI.

VMware vSphere 6.5
VMware vCenter Appliance v6.5 main UI

VMware vSphere 6.7
VMware vCenter Appliance v6.7 main UI

Besides UI enhancements (along with API and CLI) for vCenter, other updates include more robust data protection (aka backup) capability for the vCenter Server environment. The prior v6.5 version had a basic capability to specify a destination to which vCenter configuration information could be sent for backup data protection (see image below).

vCenter 6.5 backup
VMware vCenter Appliance 6.5 backup

Note that the VCSA backup only provides data protection for the vCenter Appliance, its configuration and settings, along with data collected from the VMware hosts (and VMs) being managed. VCSA backup does not provide data protection of the individual VMware hosts or VMs, which is accomplished via other data protection techniques, tools and technologies.

In v6.7, vCenter has enhanced capabilities (shown below) for enabling data protection of configuration, settings, performance and other metrics. What this means is that with the improved UI it is now possible to set up backup schedules as part of enabling automation for data protection of vCenter servers.

vCenter 6.7 backup
VMware VCSA v6.7 enhanced UI and data protection aka backup

The following shows some of the configuration sizing options as part of VCSA deployment. Note that the vCPU, Memory, and Storage are for the VCSA itself to support a given number of VMware hosts (e.g., physical machines) as well as guest virtual machines (VM).

 

Size          VCSA vCPU   VCSA Memory   VCSA Storage   Hosts    VMs
Tiny                  2        10GB         300GB         10     100
Small                 4        16GB         340GB        100    1000
Medium                8        24GB         525GB        400    4000
Large                16        32GB         740GB       1000   10000
Extra Large          24        48GB        1180GB       2000   35000

vCenter 6.7 VCSA sizing and the number of physical machines (e.g., VM hosts) and virtual machines supported
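The sizing table above can be turned into a simple lookup. The following is a minimal sketch (a hypothetical helper, not a VMware tool; the function name and data layout are illustrative) that picks the smallest VCSA deployment size covering a given number of hosts and VMs, using the values from the table:

```python
# Hypothetical helper (not a VMware tool): choose the smallest VCSA v6.7
# deployment size that covers a given environment, using the sizing table above.
VCSA_SIZES = [
    # (name, vCPU, memory GB, storage GB, max hosts, max VMs)
    ("Tiny", 2, 10, 300, 10, 100),
    ("Small", 4, 16, 340, 100, 1000),
    ("Medium", 8, 24, 525, 400, 4000),
    ("Large", 16, 32, 740, 1000, 10000),
    ("Extra Large", 24, 48, 1180, 2000, 35000),
]

def pick_vcsa_size(hosts: int, vms: int) -> str:
    """Return the smallest sizing tier supporting the host and VM counts."""
    for name, vcpu, mem_gb, disk_gb, max_hosts, max_vms in VCSA_SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name
    raise ValueError("exceeds single-VCSA limits; consider multiple linked vCenters")

print(pick_vcsa_size(50, 500))  # prints "Small"
```

Note the fall-through: an environment beyond the Extra Large limits is exactly the case where multiple vCenters and linked mode (discussed next) come into play.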

Keep in mind that in addition to the above individual VCSA configuration limits, multiple vCenters can be grouped, including linked mode spanning on-site, on-premises (on-prem if you prefer) as well as the cloud. VMware vCenter Server hybrid linked mode enables seamless visibility and insight across on-site, on-premises as well as public clouds such as AWS among others.

In other words, vCenter with hybrid linked mode enables you to have situational awareness and avoid flying blind in and among clouds. As part of hybrid vCenter environment support, cross-cloud (public, private) hot and cold migration is supported, including clone as well as vMotion provisioning across mixed VMware versions. Using linked mode, multiple roles, permissions, tags and policies can be managed across different groups (e.g., unified management) as well as locations.

VMware and vSphere Security

Security is a big push for VMware with this release, including Trusted Platform Module (TPM) 2.0 along with Virtual TPM 2.0 for protecting both the hypervisors and guest operating systems. Data encryption was introduced in vSphere 6.5 and is enhanced with increased management simplicity along with protection of data at rest and in flight (while in motion).

In other words, encrypted vMotion across different vCenter instances and versions is supported, as well as across hybrid environments (e.g., on-premises and public cloud). Other security enhancements include tighter collaboration and integration with Microsoft for Windows VMs, as well as vSAN, NSX and vRealize for a secure software-defined data infrastructure aka SDDC. For example, VMware has enhanced support for Microsoft Virtualization Based Security (VBS) including Credential Guard, where vSphere provides a secure virtual hardware platform.

Additional VMware 6.7 security enhancements include multiple syslog targets and FIPS 140-2 validated modules. Note that there is a difference between FIPS certified and FIPS validated; VMware vCenter and ESXi leverage two modules (VM Kernel Cryptographic and OpenSSL) that are currently validated. VMware is not playing games like some vendors when it comes to disclosing FIPS 140-2 validated vs. certified.

Note, when a vendor mentions FIPS 140-2 and implies or says certified, ask them if they indeed are certified. Any vendor who is actually FIPS 140-2 certified should not get upset if you press them politely. Instead, they should thank you for asking. OTOH, if a vendor gives you a used car salesperson style dance or gets upset, ask them why so sensitive, or, perhaps, what they are ashamed of or hiding, just saying. Learn more here.

vRealize Operations Manager (vROps)

The vRealize Operations Manager (vROps) v6.7 dashboard for the vSphere client plugin provides an overview cluster view and alerts for both vCenter and vSAN. What this means is that you will want to upgrade vROps to v6.7. The vROps benefit is dashboards for optimal performance, capacity, troubleshooting, and management configuration.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

VMware continues to enhance their core SDDC data infrastructure resources to support new and emerging, as well as legacy enterprise applications at scale. VMware enhancements include management, security along with other updates to support the demanding needs of various applications and workloads, along with supporting application developers.

Some examples of demanding workloads include among others AL, Big Data, Machine Learning, In memory and high-performance compute (HPC) among other resource-intensive new workloads, as well as existing applications. This includes enhanced support for Nvidia physical and virtual Graphical Processing Units (GPU) that are used in support for compute-intensive graphics, as well as non-graphic processing (e.g., AI, ML) workloads.

With the v6.7 announcements, VMware is providing proof points that they are continuing to invest in their core SDDC enabling technologies. VMware is also demonstrating the evolution of the vSphere ESXi hypervisor along with associated management tools for hybrid environments with ease of use management at scale, along with security. View more about VMware vSphere vSAN vCenter v6.7 SDDC details in part three of this three-part series here (focus on server storage I/O, deployment information and analysis).

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

VMware vSphere vSAN vCenter Server Storage I/O Enhancements


This is part three of a three-part series looking at last week's v6.7 VMware vSphere vSAN vCenter Server Storage I/O Enhancements. The focus of this post is on server, storage, and I/O along with deployment and other wrap-up items. In case you missed them, read part one here, and part two here.

As part of updates to vSphere, vSAN and vCenter, VMware introduced several server storage I/O enhancements, some of which have already been mentioned.

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

Server Storage I/O enhancements for vSphere, vSAN, and vCenter include:

  • Native 4K (4kn) block sector size for HDD and SSD devices
  • Intel Volume Management Device (VMD) for NVMe flash SSD
  • Support for Persistent Memory (PMEM) aka Storage Class Memory (SCM)
  • SCSI UNMAP (similar to TRIM) for SSD space reclamation
  • XCOPY and VAAI enhancements
  • VMFS-6 is now the default file system
  • VMFS-6 SESparse vSphere snapshot space reclamation
  • VVOL supporting SCSI-3 persistent reservations and IPv6
  • Reduce dependences on RDMs with VVOL enhancements
  • Software-based Fibre Channel over Ethernet (FCoE) initiator
  • Para Virtualized RDMA (PV-RDMA)
  • Various speeds and feeds enhancements

VMware vSphere 6.7 also adds native 4Kn sector size (e.g., 4096-byte sectors) in addition to traditional native and emulated 512-byte sectors for HDD as well as SSD. The larger sector size means performance improvements along with better storage allocation for applications, particularly for large-capacity devices. Other server storage I/O updates include RDMA over Converged Ethernet (RoCE) enabled Remote Direct Memory Access (RDMA) as well as Intel VMD for NVMe. Learn more about NVMe here.
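As a rough arithmetic illustration of why larger sectors help on large-capacity devices, compare how many sectors must be addressed and tracked for the same drive at 512-byte vs. 4Kn sector sizes (the 8 TB capacity is just an example, not from the announcement):

```python
# Arithmetic sketch: sector counts for the same capacity at 512-byte vs.
# native 4Kn (4096-byte) sector sizes. Fewer, larger sectors mean less
# addressing and per-sector overhead on large-capacity devices.
capacity_bytes = 8 * 10**12  # an example 8 TB (decimal) drive

sectors_512 = capacity_bytes // 512   # 15625000000 sectors
sectors_4kn = capacity_bytes // 4096  # 1953125000 sectors

print(sectors_512 // sectors_4kn)  # prints 8 (8x fewer sectors to track)
```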

Other storage-related enhancements include SCSI UNMAP (e.g., the SCSI equivalent of SSD TRIM) with selectable priority of none or low for SSD space reclamation. Also enhanced is SESparse vSphere snapshot virtual disk space reclamation (for VMFS-6). VMware XCOPY (Extended Copy) now works with vendor-specific VMware API for Array Integration (VAAI) primitives along with the SCSI T10 standard used for cloning, zeroing and copy offload to storage systems. Virtual Volumes (VVOL) have been enhanced to support IPv6 and SCSI-3 persistent reservations to help reduce dependency on or use of RDMs.

VMware configuration maximums (e.g., speeds and feeds) include server storage I/O enhancements such as boosting from 512 to 1024 LUNs per host. Other speeds and feeds improvements include going from 2048 to 4096 server storage I/O paths per host, and PVSCSI adapters now support up to 256 disks vs. 64 (virtual disks or Raw Device Mapped aka RDM). Also note that VMFS-3 is now end of life (EOL) and will be automatically upgraded to VMFS-5 during the upgrade to vSphere 6.7, while the default datastore type is VMFS-6.
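The per-host limit increases cited above are easy to capture as data. A small illustrative sketch follows (the values are as stated in this post; VMware's official configuration maximums tool remains the authoritative source):

```python
# Per-host limit increases cited in this post, v6.5 vs. v6.7
# (check VMware's configuration maximums tool for authoritative numbers).
maximums = {
    "LUNs per host":          {"6.5": 512,  "6.7": 1024},
    "storage paths per host": {"6.5": 2048, "6.7": 4096},
    "PVSCSI disks":           {"6.5": 64,   "6.7": 256},
}

for name, v in maximums.items():
    # Show each limit and its growth factor between releases
    print(f"{name}: {v['6.5']} -> {v['6.7']} ({v['6.7'] // v['6.5']}x)")
```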

Additional server storage I/O enhancements include RoCE for RDMA, enabling low latency server-to-server memory-based data movement, along with Para-virtualized RDMA (PV-RDMA) on Linux guest OS. ESXi has been enhanced with iSER (iSCSI Extensions for RDMA), leveraging faster server I/O interconnects and CPU offload. Another server storage I/O enhancement is a software-based Fibre Channel over Ethernet (e.g., SW-FCoE) initiator using lossless Ethernet fabrics.

Note as a reminder or refresher that VMware also has para (e.g., virtualization-optimized) drivers for Ethernet and other networks, NVMe as well as SCSI, in addition to standard devices. For example, from a VM you can access an NVMe backed datastore using the standard VMware SATA or SCSI controllers, LSI Logic SAS, LSI Logic Parallel, VMware Paravirtual, or the native NVMe driver (virtual machine version 6.5 or higher) for better performance. Likewise, instead of using the standard SAS and SCSI VM devices, the VMware para-virtualized SCSI (PVSCSI) adapter can be used for improved performance.

Besides the previously mentioned items, other vSAN enhancements include support for logical clusters such as Oracle RAC, Microsoft SQL Server Availability Groups, Microsoft Exchange Database Availability Groups as well as Windows Server Failover Clusters (WSFC) using the vSAN iSCSI service. Note that as a proof point of continued vSAN customer adoption, VMware is claiming 10,000 deployments. For performance, vSAN enhancements also include updates for adaptive placement, adaptive resync, as well as faster cache destage. The benefit of quicker destage is that cache can be drained or written to disk to eliminate or prevent I/O bottlenecks.

As part of supporting expanding, more demanding enterprise among other workloads, vSAN enhancements also include resiliency updates, physical resource and configuration checks, health and monitoring checks. Other vSAN improvements include streamlined workflows, converged management views across vCenter as well as vRealize tools. Read more from VMware about server storage I/O enhancements to vSphere, vSAN, and vCenter here.

VMware Server Storage I/O Memory Matters

VMware is also joining others with support for evolving persistent memory (PMEM) leveraging so-called storage class memories (SCM). Note, some refer to SCM as persistent memory (PM); however, context is needed, as PM can also mean Physical Machine, Physical Memory or Primary Memory among others. With the new PMEM support for server memory, VMware is laying the foundation for guest operating systems as well as applications to leverage the technology.

For example, Microsoft Windows Server 2016 supports SCMs as a block addressable storage medium and file system, as well as for Direct Access (e.g., DAX). What this means is that fast file systems can be backed by persistent memory that is faster than traditional SSD storage, and applications such as SQL Server that support DAX can do direct persistent I/O.

As a refresher, Non-Volatile DIMMs (NVDIMM) enable server memory by combining traditional DRAM with some persistent storage class memory. By combining DRAM and storage class memory (SCM), also known as PMEM, servers can use the RAM as fast read/write memory, with data destaged to persistent memory. Examples of SCM include Intel and Micron 3D XPoint (the basis of Intel Optane) along with others such as Everspin NVDIMM (available from Dell and HPE among others). Learn more about SSD and storage class memories (SCM) along with PMEM here, as well as NVMe here.

Deployment, be prepared before you grab the bits and install the software

For those of you who want or need to download the bits here is a link to VMware software download. However, before racing off to install the new software in your production (or perhaps even lab), do your homework. Read the important information from VMware before upgrading to vSphere here (e.g., KB53704) as well as release notes, and review VMware’s best practices for upgrading to vCenter here.

Some of the things to be aware of include upgrade order and dependencies, as well as making sure you have good current backups of your vSphere ESXi configuration and vCenter appliance, in addition to viewing the vSphere ESXi and vCenter 6.7 release notes here.

There are some hardware compatibility items you need to be aware of, both for this as well as future versions. Check out the VMware hardware (and software) compatibility list (HCL), along with partner product interoperability matrices, as well as release notes. Pay attention to devices deprecated and no longer supported in ESXi 6.7 (e.g., VMware KB52583) as well as those that may not work in future releases to avoid surprises.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

In case you missed them, read part one here and click here for part two of this series.

Some will say: what's the big deal, why all the noise, coverage and discussion for a point release?

My view is that this is a big evolutionary package of upgrade enhancements and new features, even if a so-called point release (e.g., going from 6.5 to 6.7). Some vendors might have done this type of update as a major, e.g., version 6.x to 7.x, upgrade to make more noise, get increased coverage or merely enhance the appearance of software maturity (e.g., V1.x to V2.x to V3.x, and so forth).

In the case of VMware, what some might refer to as smaller point releases are the ones such as vSphere 6.5.0 to 6.5.x among others. Thus, there is a lot in this package of updates from VMware, and it is good to see continued enhancements.

I also think that VMware is getting challenges from different fronts including Microsoft as well as cloud partners among others which is good. The reason I believe that it is okay VMware is being challenged is given their history; they tend to step up their game playing harder as well as stronger with the competition.

VMware is continuing to invest and extend its core SDDC technologies to meet the expanding demands of various organizations, from small to ultra large enterprises. What this means is that VMware is addressing ease of use for smaller, as well as removing complexity to enable simplified scaling from on-site (or on-premises and on-prem if you prefer) to the public cloud.

Overall, the version 6.7 vSphere, vSAN and vCenter SDDC core components are a useful extension of VMware's existing technology, enabling customers more flexibility, scalability, resiliency, and security to meet their various needs.

Ok, nuff said, for now.

Cheers Gs


Have you heard about the new CLOUD Act data regulation?


The new CLOUD Act data regulation became law as part of the recent $1.3 Trillion (USD) omnibus U.S. government budget spending bill passed by Congress on March 23, 2018 and signed into law by President of the U.S. (POTUS) Donald Trump.

CLOUD Act is the acronym for Clarifying Lawful Overseas Use of Data, not to be confused with initiatives such as the U.S. federal government's Cloud First among others, which are focused on using cloud, securing and complying (e.g. FedRAMP among others). In other words, the new CLOUD Act data regulation pertains to how data stored by cloud or other service providers can be accessed by law enforcement officials (LEO).

U.S. Supreme court
Supreme Court of the U.S. (SCOTUS) Image via https://www.supremecourt.gov/

CLOUD Act background and Stored Communications Act

After the signing into law of the CLOUD Act, the US Department of Justice (DOJ) asked the Supreme Court of the U.S. (SCOTUS) to dismiss the pending case against Microsoft (e.g., Azure Cloud). The case or question in front of SCOTUS pertained to whether LEO can search as well as seize information or data that is stored overseas or in foreign countries.

As a refresher, or if you had not heard, SCOTUS was asked to resolve whether a service provider responding to a warrant based on probable cause under the 1986 era Stored Communications Act is required to provide data in its custody, control or possession, regardless of whether it is stored inside or outside the US.

Microsoft Azure Regions and software defined data infrastructures
Microsoft Azure Regions via Microsoft.com

This particular case in front of SCOTUS centered on whether Microsoft (a U.S. Technology firm) had to comply with a court order to produce emails (as part of an LEO drug investigation) even if those were stored outside of the US. In this particular situation, the emails were alleged to have been stored in a Microsoft Azure Cloud Dublin Ireland data center.

For its part, Microsoft senior attorney Hasan Ali said via FCW "This bill is a significant step forward in the larger global debate on what our privacy laws should look like, even if it does not go to the highest threshold". Here are some additional perspectives from Microsoft's Brad Smith on his blog along with a video.

What is CLOUD Act

Clarifying Lawful Overseas Use of Data is the new CLOUD Act data regulation approved by Congress (House and Senate); details can be read here and here respectively, with additional perspectives here.

The new CLOUD Act law allows POTUS to enter into executive agreements with foreign governments about data on criminal suspects. Granted, what is or is not a crime in a given country will likely open a Pandora's box of issues. For example, in the case of Microsoft, if an agreement between the U.S. and Ireland were in place, and Ireland agreed to release the data, it could then be accessed.

Now, for some who might be hyperventilating after reading the last sentence, keep in mind that if you are overseas, it is up to your government to protect your privacy. The foreign government must have an agreement in place with the U.S., and a crime must have been committed, a crime that both parties concur with.

Also, keep in mind that there are appeal processes for providers, including when the customer is not a U.S. person and does not reside in the U.S., and the disclosure would put the provider at risk of violating foreign law. Also, keep in mind that various provisions must be met before a cloud or service provider has to hand over your data, regardless of what country you reside in, or where the data resides.

Where to learn more

Learn more about CLOUD Act, cloud, data protection, world backup day, recovery, restoration, GDPR along with related data infrastructure topics for cloud, legacy and other software defined environments via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Is the new CLOUD Act data regulation unique to Microsoft Azure Cloud?

No, it also applies to Amazon Web Services (AWS), Google, IBM SoftLayer Cloud, Facebook, LinkedIn, Twitter and the long list of other service providers.

What about GDPR?

Keep in mind that the new General Data Protection Regulation (GDPR) goes into effect May 25, 2018, and while based out of the European Union (EU), has global applicability across organizations of all size, scope, and type. Learn more about GDPR, data protection and its global impact here.

Thus, if you have not heard about the new CLOUD Act data regulation, now is the time to become aware of it.

Ok, nuff said, for now.

Gs


AWS Cloud Application Data Protection Webinar

Date: Tuesday, April 24, 2018 at 11:00am PT / 2:00pm ET

Only YOU can prevent data loss for on-premises, Amazon Web Services (AWS) based cloud, and hybrid applications.

Join me in this free AWS Cloud Application Data Protection Webinar (registration required), sponsored by Veeam and produced by Redmond Magazine, as we explore issues, trends, tools, best practices and techniques for enabling data protection with AWS technologies.

Hyper-V Disaster Recovery SDDC Data Infrastructure Data Protection

Attend and learn about:

  • Application-aware point in time snapshot data protection
  • Protecting AWS EC2 and on-premises applications (and data)
  • Leveraging AWS for data protection and recovery
  • And much more

Register for the live event or catch the replay here.

Where to learn more

Learn more about data protection, software defined data center (SDDC), software defined data infrastructures (SDDI), AWS, cloud and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

You cannot go forward if you cannot go back to a particular point in time (e.g. recovery point objective or RPO). Likewise, if you cannot go back to a given RPO, how can you go forward with your business as well as meet your recovery time objective (RTO)? Join us for the live conversation or replay by registering (free) here to learn how to enable AWS cloud application data protection, as well as how to use AWS S3 for on-site, on-premises data protection.
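The RPO question above boils down to whether the time since the last good protection copy is within the RPO window. A minimal illustrative sketch (a hypothetical helper, not part of any product) of that check:

```python
# Hypothetical sketch (not product code): is the most recent good
# protection copy within the recovery point objective (RPO) window?
from datetime import datetime, timedelta

def rpo_met(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """True if the newest recovery point is no older than the RPO."""
    return (now - last_backup) <= rpo

now = datetime(2018, 4, 24, 14, 0)   # 2:00pm
last = datetime(2018, 4, 24, 2, 0)   # nightly backup ran at 2:00am

print(rpo_met(last, now, timedelta(hours=24)))  # prints True
print(rpo_met(last, now, timedelta(hours=4)))   # prints False: 12h gap > 4h RPO
```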

Ok, nuff said, for now.

Gs


March 2018 Server StorageIO Data Infrastructure Update Newsletter


Server and StorageIO Update Newsletter

Volume 18, Issue 3 (March 2018)

Hello and welcome to the March 2018 Server StorageIO Data Infrastructure Update Newsletter.

If you are wondering where the January and February 2018 update newsletters are, they are rolled into this combined edition. In addition to the short email version (free signup here), you can access full versions (html here and PDF here) along with previous editions here.

In this issue:

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Data Infrastructure Data Protection and Backup BC BR DR HA Security

World Backup Day is coming up on March 31, which is a good time to remember to verify and validate that your data protection is working as intended. On one hand, I think it is a good idea to call out the importance of making sure your data is protected, including backed up.

On the other hand, data protection is not a once a year event; rather it is a year-round, 24 x 7 x 365 focus. Also the focus needs to be on more than just backup; rather, all aspects of data protection from archiving to business continuance (BC), business resiliency (BR), disaster recovery (DR), always on, always accessible, along with security and recovery.

Data Infrastructure Data Protection Backup 4 3 2 1 rule
Data Infrastructure 4 3 2 1 Data Protection and Backup
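One common reading of the 4 3 2 1 rule pictured above (hedged; see the author's materials for the full definition) is: at least four copies or versions, spanning at least three versions or points in time, on at least two different systems or media, with at least one off-site. A small illustrative sketch of that check (the helper and data layout are hypothetical):

```python
# Illustrative check of one common reading of the 4 3 2 1 rule: >= 4
# copies/versions, >= 3 versions (points in time), on >= 2 different
# systems or media, with >= 1 off-site. Hypothetical helper, for
# illustration only.
def four_three_two_one(copies):
    """copies: list of (version, system, offsite) tuples for one data item."""
    versions = {version for version, _, _ in copies}
    systems = {system for _, system, _ in copies}
    offsite = any(off for _, _, off in copies)
    return len(copies) >= 4 and len(versions) >= 3 and len(systems) >= 2 and offsite

protected = [(1, "prod-array", False), (2, "backup-nas", False),
             (3, "cloud-s3", True), (4, "cloud-s3", True)]
print(four_three_two_one(protected))  # prints True
```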

Some data spring thoughts, perspectives and reminders. Data lakes may swell beyond their banks causing rivers of data to flood as they flow into larger reservoirs, great data lakes, gulfs of data, seas and oceans of data. Granted, some of that data will be inactive cold parked like glaciers while others semi-active floating around like icebergs. Hopefully your data is stored on durable storage solutions or services and does not melt.

Data Infrastructure Server Storage I/O flash SSD NVMe
Various NAND Flash SSD devices and SAS, SATA, NVMe, M.2 interfaces

Non-Volatile Memory (NVM) includes various solid state device (SSD) mediums (e.g. nand flash, 3D XPoint, MRAM among others) and packaging (drives, PCIe add-in cards [AiC], along with entire systems, appliances or arrays). Also part of the continued evolution of NVM, SSD and other persistent memories (PM), including storage class memories (SCM), are different access protocol interfaces.

Keep in mind that there is a difference between NVM (medium) and NVMe (access). NVM is the generic category of mediums or media and devices such as nand flash, nvram, and 3D XPoint among other SCMs (and PMs). In other words, NVM is what devices use for storing data; NVMe is how devices and systems are accessed. NVMe and its variations are how NVM, SSD, PM and SCM media and devices get accessed locally, as well as over network fabrics (e.g. NVMe-oF and FC-NVMe).

NVMe continues to evolve including with networked fabric variations such as RDMA based NVMe over Fabric (NVMe-oF), along with Fibre Channel based (FC-NVMe). The Fibre Channel Industry Association trade group recently held its second multi-vendor plugfest in support of NVMe over Fibre Channel.

Read more about NVM, NVMe, SSD, SCM, flash and related technologies, tools, trends, tips via the following resources:

Has Object Storage failed to live up to its industry hype lacking traction? Or, is object storage (also known as blobs) progressing with customer adoption and deployment on normal realistic timelines? Recently I have seen some industry comments about object storage not catching on with customers or failing to live up to its hyped expectation. IMHO object storage is very much alive along with block, file, table (e.g. database SQL and NoSQL repositories), message/queue among others, as well as emerging blockchain aka data exchanges.

Various Industry and Customer Adoption Deployment timeline
Various Industry and Customer Adoption Deployment Timeline (Via: StorageIOblog.com)

An issue with object storage is that it is still relatively new and still evolving, and applications in many IT environments do not yet speak or access objects and blobs natively. Likewise, as is often the case, industry adoption and deployment is usually early and short term around the hype, vs. the longer cycle of customer adoption and deployment. The downside for those who focus only on object storage (or blobs) is that they may be under pressure to deliver short term instead of adjusting to customer cycles, which take longer; however, real adoption and deployment also last longer.

While the hype and industry buzz around object storage (and blobs) may have faded, customer adoption continues and is here to stay, along with block, file among others, learn more at www.objectstoragecenter.com. Also keep in mind that there is a difference between industry and customer adoption along with deployment.

Some recent Industry Activities, Trends, News and Announcements include:

In case you missed it, Amazon Web Services (AWS) announced EKS (Elastic Kubernetes Service) which, as its name implies, is an easy to use and manage Kubernetes (containers, serverless data infrastructure) offering running on AWS. AWS joins others including Microsoft Azure Kubernetes Service (AKS), Google's Kubernetes Engine (GKE), EasyStack (ESContainer for OpenStack and Kubernetes), and VMware Pivotal Container Service (PKS) among others. What this means is that in the container and serverless data infrastructure ecosystem, Kubernetes container management (orchestration platform) is gaining in both industry as well as customer adoption along with deployment.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via BizTech: Why Hybrid (SSD and HDD) Storage Might Be Fit for SMB environments
Via Excelero: Server StorageIO white paper enabling database DBaaS productivity
Via Cloudian: YouTube video interview file services on object storage with HyperFile
Via CDW Solutions: Comments on Software Defined Access
Via SearchStorage: Comments on Cloudian HyperStore on demand cloud like pricing
Via EnterpriseStorageForum: Comments and tips on Software Defined Storage Best Practices
Via PRNewsWire: Comments on Excelero NVMe NVMesh Database and DBaaS solutions
Via SearchStorage: Comments on NooBaa multi-cloud storage management
Via CDW: Comments on New IT Strategies Improve Your Bottom Line 
Via EnterpriseStorageForum: Comments on Software Defined Storage: Pros and Cons
Via DataCenterKnowledge: Comments on The Great Data Center Headache IoT
Via SearchStorage: Comments on Dell and VMware merger scenario options
Via PRNewswire: Comments on Chelsio Microsoft Validation of iWARP/RDMA
Via SearchStorage: Comments on Server Storage Industry trends and Dell EMC
Via ChannelProSMB: Comments on Hybrid HDD and SSD storage solutions
Via ChannelProNetwork: Comments on What the Future Holds for HDDs
Via HealthcareITnews: Comments on MOUNTAINS OF MOBILE DATA
Via SearchStorage: Comments on Cloudian HyperStore 7 targets multi-cloud complexities
Via GlobeNewsWire: Comments on Cloudian HyperStore 7
Via GizModo: Comments on Intel Optane 800P NVMe M.2 SSD
Via DataCenterKnowledge: Comments on getting data centers ready for IoT
Via DataCenterKnowledge: Comments on Beyond the Hype: AI in the Data Center
Via DataCenterKnowledge: Comments on Data Center and Cloud Disaster Recovery
Via SearchStorage: Comments on Cloudian HyperFile marries NAS and object storage
Via SearchStorage: Comments on Top 10 Tips on Solid State Storage Adoption Strategy
Via SearchStorage: Comments on 8 Top Tips for Beating the Big Data Deluge

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

Application Data Value Characteristics Everything Is Not The Same
Application Data Availability 4 3 2 1 Data Protection
AWS Cloud Application Data Protection Webinar
Microsoft Windows Server 2019 Insiders Preview
Application Data Characteristics Types Everything Is Not The Same
Application Data Volume Velocity Variety Everything Is Not The Same
Application Data Access Lifecycle Patterns Everything Is Not The Same
Veeam GDPR preparedness experiences Webinar walking the talk
VMware continues cloud construction with March announcements
Benefits of Moving Hyper-V Disaster Recovery to the Cloud Webinar
World Backup Day 2018 Data Protection Readiness Reminder
Use Intel Optane NVMe U.2 SFF 8639 SSD drive in PCIe slot
Data Infrastructure Resource Links cloud data protection tradecraft trends
How to Achieve Flexible Data Protection Availability with All Flash Storage Solutions
November 2017 Server StorageIO Data Infrastructure Update Newsletter
IT transformation Serverless Life Beyond DevOps Podcast
Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips
HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads
AWS Announces New S3 Cloud Storage Security Encryption Features
Introducing Windows Subsystem for Linux WSL Overview #blogtober
Hot Popular New Trending Data Infrastructure Vendors To Watch

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The list spans various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is also a good resource to check out. Speaking of my books, Didier Van Hoye (@WorkingHardInIt) has a good review over on his site you can view here; also check out the rest of his great content while there.

In case you may have missed it, here is a good presentation from AWS re:invent 2017 by Brendan Gregg (@brendangregg) about how Netflix does EC2 and other AWS tuning along with plenty of great resource links. Keith Tenzer (@keithtenzer) provides a good perspective piece about containers in a large IT enterprise environment here including various options.

Speaking of IT data centers and data infrastructure environments, check out the list of some of the world's most extreme habitats for technology here. Mark Betz (@markbetz) has a series of Docker and Kubernetes networking fundamentals posts on his site here, as well as over at Medium, including mention of Google Cloud (@googlecloud). The posts in Mark's series are good refreshers or intros to how Docker and Kubernetes handle basic networking between containers, pods, nodes and hosts in clusters. Check out part I here and part II here.

Blockchain elements
Image via https://stevetodd.typepad.com

Steve Todd (@Stevetodd) has some good perspectives about Trusted Data Exchanges e.g. life beyond blockchain and bitcoin here along with core element considerations (beyond the product pitch) here, along with associated data infrastructure and storage evolution vs. revolution here.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

March 27, 2018 – Webinar – Veeam's Road to GDPR Compliancy: The 5 Lessons Learned

Feb 28, 2018 – Webinar – Benefits of Moving Hyper-V Disaster Recovery to the Cloud

Jan 30, 2018 – Webinar – Achieve Flexible Data Protection and Availability with All Flash Storage

Nov. 9, 2017 – Webinar – All You Need To Know about ROBO Data Protection Backup

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Connect and Converse With Us


Subscribe to Newsletter – Newsletter Archives – StorageIO.com – StorageIOblog.com

What this all means and wrap-up

Data infrastructures are what exist inside physical data centers, spanning cloud, converged, hyper-converged, virtual, serverless and other software defined as well as legacy environments. The fundamental role of data infrastructures, comprising server (compute), storage and I/O networking hardware, software and services, defined by management tools, best practices and policies, is to provide a platform for applications along with their data to deliver information services. With March 31 being World Backup Day, also focus on making sure that on April 1st you are not a fool trying to recover from a bad data protection copy. With the continued movement to flash SSD along with other forms of storage class memory (SCM) and persistent memory (PM), data moves at a faster rate, meaning data protection is even more important so it can get you out of trouble as fast as you get into it.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Value Characteristics Everything Is Not The Same (Part I)

This is part one of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we start things off by looking at general application server storage I/O characteristics that have an impact on data value as well as access.

Application Data Value Software Defined Data Infrastructure Essentials Book SDDC

Everything is not the same across different organizations including Information Technology (IT) data centers, data infrastructures along with the applications as well as data they support. For example, there is so-called big data that can be many small files, objects, blobs or data and bit streams representing telemetry, click stream analytics, logs among other information.

Keep in mind that applications impact how data is accessed, used, processed, moved and stored. What this means is that a focus on data value, access patterns, along with other related topics need to also consider application performance, availability, capacity, economic (PACE) attributes.

If everything is not the same, why is so much data along with many applications treated the same from a PACE perspective?

Data Infrastructure resources including servers, storage, networks might be cheap or inexpensive, however, there is a cost to managing them along with data.

Managing includes data protection (backup, restore, BC, DR, HA, security) along with other activities. Likewise, there is a cost to the software along with cloud services among others. By understanding how applications use and interact with data, smarter, more informed data management decisions can be made.

IT Applications and Data Infrastructure Layers
IT Applications and Data Infrastructure Layers

Keep in mind that everything is not the same across various organizations, data centers, data infrastructures, data and the applications that use them. Also keep in mind that programs (e.g. applications) = algorithms (code) + data structures (how data is defined and organized, structured or unstructured).

There are traditional applications, along with those tied to Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning (ML), Big Data and other analytics including real-time click stream, media and entertainment, security and surveillance, log and telemetry processing among many others.

What this means is that there are many different applications with various characteristics and attributes, along with resource (server compute, I/O, network, memory and storage) as well as service requirements.

Common Applications Characteristics

Different applications will have various attributes, in general, as well as in how they are used; for example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. Performance, availability, capacity, and economics (PACE) describe the applications' and data's characteristics and needs, shown in the following figure.

Application and data PACE attributes
Application PACE attributes (via Software Defined Data Infrastructure Essentials)

All applications have PACE attributes, however:

  • PACE attributes vary by application and usage
  • Some applications and their data are more active than others
  • PACE characteristics may vary within different parts of an application

Think of an application's PACE, along with its associated data, as its personality: how it behaves, what it does, how it does it, and when, along with value, benefit, or cost as well as quality-of-service (QoS) attributes.

Understanding applications in different environments, including data values and associated PACE attributes, is essential for making informed server, storage, I/O decisions and data infrastructure decisions. Data infrastructures decisions range from configuration to acquisitions or upgrades, when, where, why, and how to protect, and how to optimize performance including capacity planning, reporting, and troubleshooting, not to mention addressing budget concerns.

Primary PACE attributes for active and inactive applications and data are:

P – Performance and activity (how things get used)
A – Availability and durability (resiliency and data protection)
C – Capacity and space (what things use or occupy)
E – Economics and Energy (people, budgets, and other barriers)

Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network, or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, business continuity, disaster recovery) that determine the tools, technologies, and techniques to use.

Budgets are also nearly always a concern, which for some applications means enabling more performance per cost while others are focused on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention, and disposition, among others.

Performance and Activity (How Resources Get Used)

Some applications or components that comprise a larger solution will have more performance demands than others. Likewise, the performance characteristics of applications along with their associated data will also vary. Performance applies to the server, storage, and I/O networking hardware along with associated software and applications.

For servers, performance is focused on how much CPU or processor time is used, along with memory and I/O operations. I/O operations to create, read, update, or delete (CRUD) data include activity rate (frequency or data velocity) of I/O operations (IOPS). Other considerations include the volume or amount of data being moved (bandwidth, throughput, transfer), response time or latency, along with queue depths.

Activity is the amount of work to do or being done in a given amount of time (seconds, minutes, hours, days, weeks), which can be transactions, rates, IOPs. Additional performance considerations include latency, bandwidth, throughput, response time, queues, reads or writes, gets or puts, updates, lists, directories, searches, page views, files opened, videos viewed, or downloads.
 
Server, storage, and I/O network performance include:

  • Processor CPU usage time and queues (user and system overhead)
  • Memory usage effectiveness including page and swap
  • I/O activity including between servers and storage
  • Errors, retransmission, retries, and rebuilds

The following figure shows a generic performance example of data being accessed (mixed reads, writes, random, sequential, big, small, low and high latency) on a local and a remote basis. The example shows how, for a given time interval (see lower right), applications are accessing and working with data via different data streams in the larger image left center. Also shown are queues and I/O handling along with end-to-end (E2E) response time.

fundamental server storage I/O
Server I/O performance fundamentals (via Software Defined Data Infrastructure Essentials)

Click here to view a larger version of the above figure.

Also shown on the left in the above figure is an example of E2E response time from the application through the various data infrastructure layers, as well as, lower center, the response time from the server to the memory or storage devices.

Various queues are shown in the middle of the above figure which are indicators of how much work is occurring, if the processing is keeping up with the work or causing backlogs. Context is needed for queues, as they exist in the server, I/O networking devices, and software drivers, as well as in storage among other locations.

Some basic server, storage, I/O metrics that matter include:

  • Queue depth of I/Os waiting to be processed and concurrency
  • CPU and memory usage to process I/Os
  • I/O size, or how much data can be moved in a given operation
  • I/O activity rate or IOPs = amount of data moved/I/O size per unit of time
  • Bandwidth = data moved per unit of time = I/O size × I/O rate
  • Latency usually increases with larger I/O sizes, decreases with smaller requests
  • I/O rates usually increase with smaller I/O sizes and vice versa
  • Bandwidth increases with larger I/O sizes and vice versa
  • Sequential stream access data may have better performance than some random access data
  • Not all data is conducive to being sequential stream, or random
  • Lower response time is better, higher activity rates and bandwidth are better

Queues with high latency and small I/O size or small I/O rates could indicate a performance bottleneck. Queues with low latency and high I/O rates with good bandwidth or data being moved could be a good thing. An important note is to look at several metrics, not just IOPs or activity, or bandwidth, queues, or response time. Also, keep in mind that metrics that matter for your environment may be different from those for somebody else.
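The relationships among I/O size, I/O rate and bandwidth described above can be sketched with some quick arithmetic. This is a hypothetical illustration (function names and example numbers are my own, not from the book):

```python
# Illustrating the metric relationships described above:
# bandwidth = I/O rate (IOPS) x I/O size.

def bandwidth_mb_per_sec(iops: float, io_size_kb: float) -> float:
    """Bandwidth (MB/s) = IOPS x I/O size (KB), converted to MB."""
    return iops * io_size_kb / 1024.0

def iops_from_bandwidth(bandwidth_mb: float, io_size_kb: float) -> float:
    """I/O rate (IOPS) = bandwidth / I/O size."""
    return bandwidth_mb * 1024.0 / io_size_kb

# Small I/Os: high activity rate, modest bandwidth.
print(bandwidth_mb_per_sec(iops=20000, io_size_kb=4))    # 78.125 (MB/s)
# Large I/Os: far fewer IOPS move much more data.
print(bandwidth_mb_per_sec(iops=2000, io_size_kb=256))   # 500.0 (MB/s)
```

Note how the same bandwidth can come from many small I/Os or a few large ones, which is why looking at only one metric (IOPS, bandwidth, or latency) can be misleading.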

Something to keep in perspective is that there can be a large amount of data with low performance, or a small amount of data with high-performance, not to mention many other variations. The important concept is that as space capacity scales, that does not mean performance also improves or vice versa, after all, everything is not the same.

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Keep in mind that with application data value characteristics, everything is not the same across various organizations, data centers and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. However, all applications have some element (high or low) of performance, availability, capacity and economic (PACE) attributes, along with various similarities. Likewise, data has different value at various times. Continue reading the next post (Part II Application Data Availability Everything Is Not The Same) in this five-part mini-series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Availability 4 3 2 1 Data Protection

This is part two of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application performance, availability, capacity, economic (PACE) attributes that have an impact on data value as well as availability.

4 3 2 1 data protection  Book SDDC

Availability (Accessibility, Durability, Consistency)

Just as there are many different aspects and focus areas for performance, there are also several facets to availability. Note that application performance requires availability, and availability relies on some level of performance.

Availability is a broad and encompassing area that includes data protection to protect, preserve, and serve (backup/restore, archive, BC, BR, DR, HA) data and applications. There are logical and physical aspects of availability including data protection as well as security including key management (manage your keys or authentication and certificates) and permissions, among other things.

Availability = accessibility (can you get to your application and data) + durability (is the data intact and consistent). This includes basic Reliability, Availability, Serviceability (RAS), as well as high availability, accessibility, and durability. "Durable" has multiple meanings, so context is important. One context for durable is how data infrastructure resources hold up to, survive, and tolerate wear and tear from use (i.e., endurance), for example, flash SSD or mechanical devices such as Hard Disk Drives (HDDs). Another context for durable refers to data, meaning how many copies of the data exist in various places.
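As a general illustration of why accessibility depends on every layer in the data infrastructure stack (a standard reliability-engineering sketch, not from the book), the composite availability of serially dependent components is the product of their individual availabilities:

```python
from functools import reduce

def composite_availability(*component_availability: float) -> float:
    """Availability of serially dependent components = product of each
    component's availability (standard reliability-engineering result)."""
    return reduce(lambda a, b: a * b, component_availability, 1.0)

def downtime_minutes_per_year(availability: float) -> float:
    """Expected annual downtime (minutes) for a given availability level."""
    return (1.0 - availability) * 365 * 24 * 60

# Three hypothetical "four nines" layers (server, network, storage) in series:
a = composite_availability(0.9999, 0.9999, 0.9999)
print(round(a, 6))                              # ~0.9997 overall
print(round(downtime_minutes_per_year(a), 1))   # ~157.7 minutes/year
```

The point: chaining several highly available layers still yields a lower composite availability than any single layer, which is why resiliency, redundant paths, and data protection copies matter.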

Server, storage, and I/O network availability topics include:

  • Resiliency and self-healing to tolerate failure or disruption
  • Hardware, software, and services configured for resiliency
  • Accessibility to reach or be reached for handling work
  • Durability and consistency of data to be available for access
  • Protection of data, applications, and assets including security

Additional server I/O and data infrastructure along with storage topics include:

  • Backup/restore, replication, snapshots, sync, and copies
  • Basic Reliability, Availability, Serviceability, HA, fail over, BC, BR, and DR
  • Alternative paths, redundant components, and associated software
  • Applications that are fault-tolerant, resilient, and self-healing
  • Non-disruptive upgrades, code (application or software) loads, and activation
  • Immediate data consistency and integrity vs. eventual consistency
  • Virus, malware, and other data corruption or loss prevention

From a data protection standpoint, the fundamental rule or guideline is 4 3 2 1, which means having at least four copies consisting of at least three versions (different points in time), at least two of which are on different systems or storage devices and at least one of those is off-site (on-line, off-line, cloud, or other). There are many variations of the 4 3 2 1 rule, shown in the following figure, along with approaches on how to manage the technology to use. We will go deeper into this subject in later chapters. For now, remember the following.

large version application server storage I/O
4 3 2 1 data protection (via Software Defined Data Infrastructure Essentials)

  • 4 – At least four copies of data (or more); enables durability in case a copy goes bad, is deleted, corrupted, a device fails, or a site is lost.
  • 3 – At least three versions of the data (different points in time) to retain; enables various recovery points in time to restore, resume, or restart from.
  • 2 – Data located on two or more systems (devices or media); enables protection against device, system, server, file system, or other fault/failure.
  • 1 – At least one of those copies off-premise and not live (isolated from the active primary copy); enables resiliency across sites, as well as a space, time, and distance gap for protection.
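The 4 3 2 1 guideline lends itself to a simple policy check. The following minimal sketch (names and data structures are hypothetical, for illustration only) encodes the four tests:

```python
from dataclasses import dataclass

@dataclass
class ProtectionCopy:
    """One protection copy of a data set (hypothetical illustration)."""
    version: str      # point in time, e.g. "2018-03-31T02:00"
    system: str       # which system or device holds this copy
    offsite: bool     # isolated from the active primary copy?

def meets_4_3_2_1(copies: list) -> bool:
    """At least 4 copies, 3 versions, on 2+ systems, with 1+ off-site."""
    return (len(copies) >= 4
            and len({c.version for c in copies}) >= 3
            and len({c.system for c in copies}) >= 2
            and any(c.offsite for c in copies))

copies = [
    ProtectionCopy("v3", "primary-array", False),
    ProtectionCopy("v3", "backup-server", False),
    ProtectionCopy("v2", "backup-server", False),
    ProtectionCopy("v1", "cloud-bucket", True),
]
print(meets_4_3_2_1(copies))  # True
```

Dropping any one of the four conditions (for example, keeping all copies on the primary array) makes the check fail, which mirrors how the rule guards against different fault domains.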

Capacity and Space (What Gets Consumed and Occupied)

In addition to being available and accessible in a timely manner (performance), data (and applications) occupy space. That space is memory in servers, along with consumable processor (CPU) time and I/O (performance), including over networks.

Data and applications also consume storage space where they are stored. In addition to basic data space, there is also space consumed for metadata as well as protection copies (and overhead), application settings, logs, and other items. Another aspect of capacity includes network IP ports and addresses, software licenses, server, storage, and network bandwidth or service time.

Server, storage, and I/O network capacity topics include:

  • Consumable time-expiring resources (processor time, I/O, network bandwidth)
  • Network IP and other addresses
  • Physical resources of servers, storage, and I/O networking devices
  • Software licenses based on consumption or number of users
  • Primary and protection copies of data and applications
  • Active and standby data infrastructure resources and sites
  • Data footprint reduction (DFR) tools and techniques for space optimization
  • Policies, quotas, thresholds, limits, and capacity QoS
  • Application and database optimization

DFR includes various techniques, technologies, and tools to reduce the impact or overhead of protecting, preserving, and serving more data for longer periods of time. There are many different approaches to implementing a DFR strategy, since there are various applications and data.

Common DFR techniques and technologies include archiving, backup modernization, copy data management (CDM), clean up, compress, and consolidate, data management, deletion and dedupe, storage tiering, RAID (including parity-based, erasure codes, local reconstruction codes [LRC], Reed-Solomon, and Ceph Shingled Erasure Code [SHEC], among others), along with protection configurations and thin-provisioning, among others.

DFR can be implemented in various complementary locations from row-level compression in database or email to normalized databases, to file systems, operating systems, appliances, and storage systems using various techniques.

Also, keep in mind that not all data is the same; some is sparse, some is dense, some can be compressed or deduped while others cannot. However, identical copies can be identified with links created to a common copy.
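That last point, identifying identical copies and linking them to a common copy, can be sketched with content hashing. This is a simplified, hypothetical illustration (real DFR implementations typically dedupe at the block or chunk level, not whole files):

```python
import hashlib

def dedupe(blobs: dict) -> tuple:
    """Map each named item to a single shared copy via content hash,
    a simplified sketch of dedupe-by-linking."""
    store = {}    # content hash -> one stored copy of the data
    links = {}    # name -> content hash (the "link" to the common copy)
    for name, data in blobs.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)   # keep only the first identical copy
        links[name] = digest
    return store, links

blobs = {"a.txt": b"hello", "b.txt": b"hello", "c.txt": b"world"}
store, links = dedupe(blobs)
print(len(blobs), "names ->", len(store), "unique copies")  # 3 names -> 2 unique copies
```

Here "a.txt" and "b.txt" resolve to the same stored copy, reducing the data footprint while both names remain accessible.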

Economics (People, Budgets, Energy and other Constraints)

If the one constant in life and technology is change, then the other constant is concern about economics or costs. There is a cost to enable and maintain a data infrastructure, on premises or in the cloud, which exists to protect, preserve, and serve data and information applications.

However, there should also be a benefit to having the data infrastructure to house data and support applications that provide information to users of the services. A common economic focus is what something costs, either as an up-front capital expenditure (CapEx) or as an operating expenditure (OpEx), along with recurring fees.

In general, economic considerations include:

  • Budgets (CapEx and OpEx), both up front and in recurring fees
  • Whether you buy, lease, rent, subscribe, or use free and open sources
  • People time needed to integrate and support even free open-source software
  • Costs including hardware, software, services, power, cooling, facilities, tools
  • People time includes base salary, benefits, training and education

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Keep in mind that with application data value characteristics, everything is not the same across various organizations, data centers and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. All applications have some element of performance, availability, capacity, economic (PACE) needs as well as resource demands. There is often a focus with data storage on storage efficiency and utilization, which is where data footprint reduction (DFR) techniques, tools, trends and technologies address capacity requirements. However, with data storage there is also an expanding focus on storage effectiveness, also known as productivity, tied to performance, along with availability including 4 3 2 1 data protection. Continue reading the next post (Part III Application Data Characteristics Types Everything Is Not The Same) in this series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.