Application Data Availability 4 3 2 1 Data Protection

This is part two of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same), a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at the application performance, availability, capacity, and economic (PACE) attributes that have an impact on data value as well as availability.


Availability (Accessibility, Durability, Consistency)

Just as there are many different aspects and focus areas for performance, there are also several facets to availability. Note that application performance requires availability, and availability relies on some level of performance.

Availability is a broad and encompassing area that includes data protection (backup/restore, archive, BC, BR, DR, HA) to protect, preserve, and serve data and applications. There are logical and physical aspects of availability, including data protection as well as security, such as key management (managing your keys for authentication and certificates) and permissions, among other things.

Availability = accessibility (can you get to your application and data) + durability (is the data intact and consistent). This includes basic Reliability, Availability, Serviceability (RAS), as well as high availability, accessibility, and durability. “Durable” has multiple meanings, so context is important. One context is how data infrastructure resources hold up to, survive, and tolerate wear and tear from use (i.e., endurance), for example, Flash SSDs or mechanical devices such as Hard Disk Drives (HDDs). Another context refers to data, meaning how many copies exist in various places.
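As a quick aside, one common way to quantify the accessibility side (a general reliability-engineering formula, not something specific to this excerpt) uses mean time between failures (MTBF) and mean time to repair (MTTR):

    Availability = MTBF / (MTBF + MTTR)

For example, an MTBF of 999 hours with an MTTR of 1 hour gives 999/1000 = 0.999, i.e., 99.9% ("three nines") availability.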

Server, storage, and I/O network availability topics include:

  • Resiliency and self-healing to tolerate failure or disruption
  • Hardware, software, and services configured for resiliency
  • Accessibility to reach or be reached for handling work
  • Durability and consistency of data to be available for access
  • Protection of data, applications, and assets including security

Additional server I/O and data infrastructure along with storage topics include:

  • Backup/restore, replication, snapshots, sync, and copies
  • Basic Reliability, Availability, Serviceability, HA, fail over, BC, BR, and DR
  • Alternative paths, redundant components, and associated software
  • Applications that are fault-tolerant, resilient, and self-healing
  • Non-disruptive upgrades, code (application or software) loads, and activation
  • Immediate data consistency and integrity vs. eventual consistency
  • Virus, malware, and other data corruption or loss prevention

From a data protection standpoint, the fundamental rule or guideline is 4 3 2 1, which means having at least four copies consisting of at least three versions (different points in time), at least two of which are on different systems or storage devices, and at least one of those is off-site (on-line, off-line, cloud, or other). There are many variations of the 4 3 2 1 rule, shown in the following figure along with approaches for how to manage the technology used. We will go deeper into this subject in later chapters. For now, remember the following.

4 3 2 1 data protection (via Software Defined Data Infrastructure Essentials)

4 – At least four copies of data (or more). Enables durability in case a copy goes bad, is deleted or corrupted, or a device or site fails.
3 – At least three versions of the data retained (different points in time). Enables various recovery points in time to restore, resume, or restart from.
2 – Data located on two or more systems (devices or media). Enables protection against device, system, server, file system, or other fault/failure.
1 – At least one of those copies off-premises and not live (isolated from the active primary copy). Enables resiliency across sites, as well as a space, time, and distance gap for protection.
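To make the guideline concrete, here is a minimal sketch (my own illustrative code, not from the book; the copy records and field names are hypothetical) that checks a set of protection copies against 4 3 2 1:

    # Minimal 4 3 2 1 rule check (field names are hypothetical)
    def meets_4321(copies):
        total = len(copies)                                # 4: at least four copies
        versions = len({c["version"] for c in copies})     # 3: at least three points in time
        systems = len({c["system"] for c in copies})       # 2: two or more systems/devices
        offsite = sum(1 for c in copies if c["offsite"])   # 1: at least one off-site copy
        return total >= 4 and versions >= 3 and systems >= 2 and offsite >= 1

    copies = [
        {"version": "v1", "system": "nas1",  "offsite": False},
        {"version": "v2", "system": "nas1",  "offsite": False},
        {"version": "v3", "system": "nas2",  "offsite": False},
        {"version": "v3", "system": "cloud", "offsite": True},
    ]
    print(meets_4321(copies))  # True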

Capacity and Space (What Gets Consumed and Occupied)

In addition to being available and accessible in a timely manner (performance), data (and applications) occupy space. That space includes memory in servers, as well as consumable processor (CPU) time along with I/O (performance), including over networks.

Data and applications also consume storage space where they are stored. In addition to basic data space, there is also space consumed for metadata as well as protection copies (and overhead), application settings, logs, and other items. Another aspect of capacity includes network IP ports and addresses, software licenses, server, storage, and network bandwidth or service time.

Server, storage, and I/O network capacity topics include:

  • Consumable time-expiring resources (processor time, I/O, network bandwidth)
  • Network IP and other addresses
  • Physical resources of servers, storage, and I/O networking devices
  • Software licenses based on consumption or number of users
  • Primary and protection copies of data and applications
  • Active and standby data infrastructure resources and sites
  • Data footprint reduction (DFR) tools and techniques for space optimization
  • Policies, quotas, thresholds, limits, and capacity QoS
  • Application and database optimization

DFR includes various techniques, technologies, and tools to reduce the impact or overhead of protecting, preserving, and serving more data for longer periods of time. There are many different approaches to implementing a DFR strategy, since there are various applications and data.

Common DFR techniques and technologies include archiving, backup modernization, copy data management (CDM), cleanup, compression, consolidation, data management, deletion and dedupe, storage tiering, and RAID (including parity-based, erasure codes, local reconstruction codes [LRC], Reed-Solomon, and Ceph Shingled Erasure Code [SHEC], among others), along with protection configurations and thin provisioning, among others.

DFR can be implemented in various complementary locations from row-level compression in database or email to normalized databases, to file systems, operating systems, appliances, and storage systems using various techniques.

Also, keep in mind that not all data is the same; some is sparse, some is dense, some can be compressed or deduped while other data cannot. However, identical copies can be identified, with links created to a common copy.
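As a simple illustration of that last point, here is a sketch of my own (not code from the book) that finds byte-identical files by content hash and replaces duplicates with links to a single common copy; real dedupe tools typically work at the block or chunk level rather than on whole files:

    import hashlib, os

    def dedupe_directory(root):
        """Replace byte-identical files under root with hard links to one common copy."""
        seen = {}  # content hash -> path of the first (common) copy
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                if digest in seen:
                    os.remove(path)               # drop the duplicate...
                    os.link(seen[digest], path)   # ...and link to the common copy
                else:
                    seen[digest] = path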

Economics (People, Budgets, Energy and other Constraints)

If the one constant in life and technology is change, then the other constant is concern about economics or cost. There is a cost to enable and maintain a data infrastructure, on-premises or in the cloud, which exists to protect, preserve, and serve data, information, and applications.

However, there should also be a benefit from having the data infrastructure to house data and support applications that provide information to users of the services. A common economic focus is what something costs, either as an up-front capital expenditure (CapEx) or as an operating expenditure (OpEx), along with recurring fees.

In general, economic considerations include:

  • Budgets (CapEx and OpEx), both up front and in recurring fees
  • Whether you buy, lease, rent, subscribe, or use free and open sources
  • People time needed to integrate and support even free open-source software
  • Costs including hardware, software, services, power, cooling, facilities, tools
  • People time includes base salary, benefits, training and education

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Keep in mind that with Application Data Value Characteristics, everything is not the same across various organizations, data centers, and data infrastructures spanning legacy, cloud, and other software defined data center (SDDC) environments. All applications have some element of performance, availability, capacity, and economic (PACE) needs as well as resource demands. With data storage there is often a focus on storage efficiency and utilization, which is where data footprint reduction (DFR) techniques, tools, trends, and technologies address capacity requirements. However, there is also an expanding focus on storage effectiveness, also known as productivity, tied to performance, along with availability including 4 3 2 1 data protection. Continue reading the next post (Part III: Application Data Characteristics Types Everything Is Not The Same) in this series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Characteristics Types Everything Is Not The Same

This is part three of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same), a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application and data characteristics with a focus on different types of data. There is more to data than simply being big data, fast data, big fast data, or unstructured, structured, or semi-structured, some of which has been touched on in this series, with more to follow. Note that there is also data in terms of programs, applications, code, rules, policies, and configuration settings, along with metadata and other stored items.


Various Types of Data

Data types along with characteristics include big data, little data, fast data, and old as well as new data, each with different value, life cycle, volume, and velocity. There is data in files and objects, some of it big, representing images, figures, text, and binary content, structured or unstructured, that is software-defined by the applications that create, modify, and use it.

There are many different types of data and applications to meet various business, organization, or functional needs. Keep in mind that applications are based on programs, which consist of algorithms and data structures that define the data, how to use it, as well as how and when to store it. Those data structures define data that programs transform into information, while also being stored in memory and on storage in various formats.

Just as various applications have different algorithms, they also have different types of data. Even though everything is not the same across environments, or even in how the same applications get used across various organizations, there are some similarities and general characteristics. Keep in mind that information is the result of programs (applications and their algorithms) processing data into something useful or of value.

Data typically has a basic life cycle of:

  • Creation and some activity, including being protected
  • Dormant, followed by either continued activity or going inactive
  • Disposition (delete or remove)

In general, data can be:

  • Temporary, ephemeral or transient
  • Dynamic or changing (“hot data”)
  • Active static on-line, near-line, or off-line (“warm-data”)
  • In-active static on-line or off-line (“cold data”)

Data is organized as:

  • Structured
  • Semi-structured
  • Unstructured

General data characteristics include:

  • Value = From no value to unknown to some or high value
  • Volume = Amount of data, files, objects of a given size
  • Variety = Various types of data (small, big, fast, structured, unstructured)
  • Velocity = Data streams, flows, rates, load, process, access, active or static

The following figure shows how different data has various values over time. Data that has no value today or in the future can be deleted, while data with unknown value can be retained.

Different data with various values over time (data value: known, unknown, and no value)

General characteristics include the value of the data, which in turn helps determine its performance, availability, capacity, and economic considerations. Also, data can be ephemeral (temporary) or kept for longer periods of time on persistent, non-volatile storage (you do not lose the data when power is turned off). Examples of temporary data include work and scratch areas, such as where data gets imported into, or exported out of, an application or database.

Data can also be little, big, or big and fast, terms which describe in part the size as well as volume along with the speed or velocity of being created, accessed, and processed. The importance of understanding characteristics of data and how their associated applications use them is to enable effective decision-making about performance, availability, capacity, and economics of data infrastructure resources.

Data Value

There is more to data storage than how much space capacity you get per unit of cost.

All data has one of three basic values:

  • No value = ephemeral/temp/scratch = Why keep it?
  • Some value = current or emerging future value, which can be low or high = Keep
  • Unknown value = protect until value is unlocked, or no remaining value

In addition to the above basic three, data with some value can also be further subdivided into little value, some value, or high value. Of course, you can keep subdividing into as many more or different categories as needed, after all, everything is not always the same across environments.

Besides having some value, data's value can also change, increasing or decreasing over time, or even going from unknown to known, known to unknown, or to no value. Data with no value can be discarded; if in doubt, make and keep a copy of that data somewhere safe until its value (or lack of value) is fully known and understood.

The importance of understanding the value of data is to enable effective decision-making on where and how to protect, preserve, and cost-effectively store the data. Note that cost-effective does not necessarily mean the cheapest or lowest-cost approach; rather, it means the way that aligns with the value and importance of the data at a given point in time.
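As an illustrative sketch (my own example, not from the book; the tier names, copy counts, and categories are hypothetical), value-based placement decisions can be expressed as a simple policy table:

    # Map data value classes to protection/storage policies (all values hypothetical)
    POLICIES = {
        "high":    {"tier": "fast-ssd",   "copies": 4, "offsite": True},
        "some":    {"tier": "capacity",   "copies": 3, "offsite": True},
        "unknown": {"tier": "cold-cloud", "copies": 2, "offsite": True},   # protect until value is known
        "none":    {"tier": None,         "copies": 0, "offsite": False},  # candidate for disposal
    }

    def placement(value_class):
        return POLICIES[value_class]

    print(placement("unknown"))  # {'tier': 'cold-cloud', 'copies': 2, 'offsite': True}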

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software-defined data center (SDDC), software-defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Data has different value at various times, and that value is also evolving. Everything is not the same across various organizations, data centers, and data infrastructures spanning legacy, cloud, and other software defined data center (SDDC) environments. Continue reading the next post (Part IV: Application Data Volume Velocity Variety Everything Is Not The Same) in this series here.

Ok, nuff said, for now.

Gs


Application Data Volume Velocity Variety Everything Is Not The Same

This is part four of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same), a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application and data characteristics with a focus on data volume, velocity, and variety; after all, everything is not the same, not to mention the many different aspects of big data as well as little data.


Volume of Data

More data is being created at a faster rate every day, and that data is being retained for longer periods. Some data being retained has known value, while a growing amount of data has unknown value. Data is generated or created from many sources, including mobile devices, social networks, web-connected systems or machines, and sensors, including IoT and IoD. Besides the many sources data is created from, there are also many consumers of data (applications) that range from legacy to mobile, cloud, and IoT, among others.

Unknown-value data may eventually have value in the future, when somebody realizes they can do something with it, or a technology tool or application becomes available to transform the data with unknown value into valuable information.

Some data gets retained in its native or raw form, while other data gets processed by application program algorithms into summary data, or is curated and aggregated with other data to be transformed into new useful data. The figure below shows, from left to right and front to back, more data being created, and that data also getting larger over time. For example, on the left are two data items, objects, files, or blocks representing some information.

In the center of the following figure are more columns and rows of data, with each of those data items also becoming larger. Moving farther to the right, there are yet more data items stacked up higher, as well as across and farther back, with those items also being larger. The following figure can represent blocks of storage, files in a file system, rows, and columns in a database or key-value repository, or objects in a cloud or object storage system.

Increasing data velocity and volume: more data, and data getting larger

In addition to more data being created, some of that data is relatively small in terms of the records or data structure entities being stored. However, there can be a large quantity of those smaller data items. In addition to the amount of data, as well as the size of the data, protection or overhead copies of data are also kept.

Another dimension is that data is also getting larger, where the data structures describing a piece of data for an application have increased in size. For example, a still photograph taken with a digital camera, cell phone, or other mobile handheld device, drone, or IoT device increases in size with each new generation of cameras as there are more megapixels.

Variety of Data

In addition to having value and volume, there are also different varieties of data, including ephemeral (temporary), persistent, primary, metadata, structured, semi-structured, unstructured, little, and big data. Keep in mind that programs, applications, tools, and utilities get stored as data, while they also use, create, access, and manage data.

There is also primary data and metadata, or data about data, as well as system data that is also sometimes referred to as metadata. Here is where context comes into play as part of tradecraft, as there can be metadata describing data being used by programs, as well as metadata about systems, applications, file systems, databases, and storage systems, among other things, including little and big data.

Context also matters regarding big data, as there are applications such as statistical analysis software and Hadoop, among others, for processing (analyzing) large amounts of data. The data being processed may not be big regarding the records or data entity items, but there may be a large volume. In addition to big data analytics, data, and applications, there is also data that is very big (as well as large volumes or collections of data sets).

For example, video and audio, among others, may also be referred to as big fast data, or large data. A challenge with larger data items is the complexity of moving them over distance promptly, as well as processing them, which requires new approaches, algorithms, data structures, and storage management techniques.

Likewise, the challenges with large volumes of smaller data are similar in that data needs to be moved, protected, preserved, and served cost-effectively for long periods of time. Both large and small data are stored (in memory or storage) in various types of data repositories.

In general, data in repositories is accessed locally, remotely, or via a cloud using the following (see the sketch after this list):

  • Objects and blobs via streams, queues, and Application Programming Interfaces (APIs)
  • File-based using local or networked file systems
  • Block-based access of disk partitions, LUNs (logical unit numbers), or volumes
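A minimal sketch of the first two access methods (my own illustration; the bucket, key, and mount path are hypothetical, and the object example assumes the boto3 AWS SDK is installed and configured):

    import boto3

    # Object access: fetch an object from a cloud object store via API
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="example-bucket", Key="data/item.bin")
    payload = obj["Body"].read()

    # File access: read similar data through a local or networked file system
    with open("/mnt/nfs/data/item.bin", "rb") as f:
        payload = f.read()

    # Block access happens below the file system (partitions, LUNs, volumes)
    # and is typically handled by the OS and drivers rather than application code.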

The following figure shows varieties of application data value including (left) photos or images, audio, videos, and various log, event, and telemetry data, as well as (right) sparse and dense data.

Varieties of data (bits, bytes, blocks, blobs, and bitstreams)

Velocity of Data

Data, in addition to having value (known, unknown, or none), volume (size and quantity), and variety (structured, unstructured, semi-structured, primary, metadata, small, big), also has velocity. Velocity refers to how fast (or slowly) data is accessed, including being stored, retrieved, updated, or scanned, and whether it is active (being updated) or static and dormant. In addition to data access and life cycle, velocity also refers to how data is used, such as random or sequential access, or some combination. Think of data velocity as how data, or streams of data, flow in various ways.

Velocity also describes how data is used and accessed, including:

  • Active (hot), static (warm and WORM), or dormant (cold)
  • Random or sequential, read or write-accessed
  • Real-time (online, synchronous) or time-delayed

Why this matters is that by understanding how applications use data, or how data is accessed via applications, you can make informed decisions. That insight enables you to design, configure, and manage servers, storage, and I/O resources (hardware, software, services) to meet various needs. Understanding Application Data Value, including the velocity of the data both when it is created and when it is used, is important for aligning the applicable performance techniques and technologies.
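To illustrate the random-versus-sequential distinction, here is a minimal sketch (my own illustrative code; the file path and sizes are arbitrary):

    import os, random

    path = "/tmp/example.dat"                 # hypothetical scratch file
    with open(path, "wb") as f:
        f.write(os.urandom(4 << 20))          # 4 MiB of sample data

    # Sequential access: stream through the file front to back
    with open(path, "rb") as f:
        while f.read(1 << 20):                # 1 MiB chunks
            pass

    # Random access: seek to arbitrary offsets before each read
    with open(path, "rb") as f:
        size = os.path.getsize(path)
        for _ in range(100):
            f.seek(random.randrange(size - 4096))
            f.read(4096)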

Where to learn more

Learn more about Application Data Value, application characteristics, performance, availability, capacity, economic (PACE) along with data protection, software-defined data center (SDDC), software-defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Data has different value, size, and velocity as part of its characteristics, including how it is used by various applications. Keep in mind that with Application Data Value Characteristics, everything is not the same across various organizations, data centers, and data infrastructures spanning legacy, cloud, and other software defined data center (SDDC) environments. Continue reading the next post (Part V: Application Data Access Life Cycle Patterns Everything Is Not The Same) in this series here.

Ok, nuff said, for now.

Gs


Application Data Access Life Cycle Patterns Everything Is Not The Same (Part V)

This is part five of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same), a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we look at various application and data life-cycle patterns, as well as wrap up this series.


Active (Hot), Static (Warm and WORM), or Dormant (Cold) Data and Lifecycles

When it comes to Application Data Value, a common question I hear is: why not keep all data?

If the data has value, and you have a large enough budget, why not? On the other hand, most organizations have a budget and other constraints that determine how much and what data to retain.

Another common question I get asked (or told) is: isn't the objective to keep less data to cut costs?

If the data has no value, then get rid of it. On the other hand, if data has value or unknown value, then find ways to remove the cost of keeping more data for longer periods of time so its value can be realized.

In general, the data life cycle (called by some cradle-to-grave, or birth/creation to disposition) is: create, save and store, perhaps update and read, with access patterns and value changing over time. During that time, the data (which includes applications and their settings) will be protected with copies or some other technique, and eventually disposed of.

Between the time when data is created and when it is disposed of, there are many variations of what gets done and needs to be done. Considering static data for a moment, some applications and their data (or data and their applications) create data that is active for a short period, then goes dormant, then is active again briefly before going cold (see the left side of the following figure). This is the classic application, data, and information life-cycle management (ILM) model, with tiering or data movement and migration that still applies for some scenarios.

Changing data access patterns for different applications

However, a newer scenario over the past several years that continues to increase is shown on the right side of the above figure. In this scenario, data is initially active for updates, then goes cold or WORM (Write Once/Read Many); however, it warms back up as a static reference, on the web, as big data, and for other uses where it is used to create new data and information.

Data, in addition to its other attributes already mentioned, can be active (hot), residing in a memory cache or buffers inside a server, or on a fast storage appliance or caching appliance. Hot data means that it is actively being used for reads or writes (this is what the term heat map pertains to in the context of servers, storage, data, and applications). The heat map shows where the hot or active data is, along with its other characteristics.

Context is important here, as there are also IT facilities heat maps, which refer to physical facilities, including what servers are consuming power and generating heat. Note that some current and emerging data center infrastructure management (DCIM) tools can correlate physical facilities power, cooling, and heat to actual work being done from an application's perspective. This correlated or converged management view enables more granular analysis and effective decision-making on how to best utilize data infrastructure resources.

In addition to being hot or active, data can be warm (not as heavily accessed) or cold (rarely if ever accessed), as well as online, near-line, or off-line. As their names imply, warm data may occasionally be used, either updated and written, or static and just being read. Some data also gets protected as WORM (Write Once/Read Many) data using hardware or software technologies. WORM data, not to be confused with warm data, is fixed or immutable (it cannot be changed).

When looking at data (or storage), it is important to see when the data was created as well as when it was modified. However, you should avoid the mistake of looking only at when it was created or modified: instead, also look to see when it was last read, as well as how often it is read. You might find that some data has not been updated for several years, but is still accessed several times an hour or minute. Also, keep in mind that the metadata about the actual data may be updated even while the data itself is static.
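As a small illustration (my own sketch, not from the book; note that many systems mount file systems with relaxed access-time updates such as relatime, so the last-read time may be approximate or unavailable):

    import os, time

    st = os.stat("/var/log/syslog")  # hypothetical path; use any file of interest
    print("metadata changed:", time.ctime(st.st_ctime))  # ctime on Unix
    print("last modified:  ", time.ctime(st.st_mtime))   # last write
    print("last read:      ", time.ctime(st.st_atime))   # last access, if atime enabled

    # Static-but-still-active data shows an old mtime with a recent atime.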

Also, look at your application's characteristics as well as how data gets used, to see if it is conducive to caching or automated tiering based on activity, events, or time. For example, a large amount of data for an energy or oil exploration project may normally sit on slower, lower-cost storage, but now and then some analysis needs to run on it.

Using data and storage management tools, given notice or based on activity, that large or big data could be promoted to faster storage, or applications migrated closer to the data, to speed up processing. Another example is weekly, monthly, quarterly, or year-end processing of financial, accounting, payroll, inventory, or enterprise resource planning (ERP) schedules. Knowing how and when the applications use the data (which is also understanding the data), automated tools and policies can be used to tier or cache data to speed up processing and thereby boost productivity.

All applications have performance, availability, capacity, economic (PACE) attributes, however:

  • PACE attributes vary by Application Data Value and usage
  • Some applications and their data are more active than others
  • PACE characteristics may vary within different parts of an application
  • PACE application and data characteristics along with value change over time

Read more about Application Data Value, PACE and application characteristics in Software Defined Data Infrastructure Essentials (CRC Press 2017).

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Keep in mind that Application Data Value everything is not the same across various organizations, data centers, data infrastructures, data and the applications that use them.

Also keep in mind that there is more data being created; the size of those data items, files, objects, entities, and records is also increasing, as well as the speed at which they get created and accessed. The challenge is not just that there is more data, or that data is bigger, or accessed faster; it's all of those, along with changing value as well as diverse applications, to keep in perspective. With the new General Data Protection Regulation (GDPR) going into effect May 25, 2018, now is a good time to assess and gain insight into what data you have, its value, and its retention as well as disposition policies.

Remember, there are different data types, value, life-cycle, volume and velocity that change over time, and with Application Data Value Everything Is Not The Same, so why treat and manage everything the same?

Ok, nuff said, for now.

Gs


VMware continues cloud construction with March announcements


VMware continues cloud construction with March announcements of new features and other enhancements.

VMware Cloud provides consistent operations and infrastructure (via VMware.com)

With its recent announcements, VMware continues cloud construction adding new features, enhancements, partnerships along with services.

VMware continues cloud construction. Like other vendors and service providers who tried and tested the waters of having their own public cloud, VMware has moved beyond its vCloud Air initiative, selling it to OVH. VMware, while a publicly traded company (VMW), is by way of majority ownership part of the Dell Technologies family of companies via the 2016 acquisition of EMC by Dell. What this means is that, like Dell Technologies, VMware is focused on providing solutions and services to its cloud provider partners instead of building, deploying, and running its own cloud in competition with those partners.

VMware Cloud data infrastructure and SDDC layers (via VMware.com)

The VMware Cloud message and strategy is focused on providing software solutions to cloud and other data infrastructure partners (and customers) instead of competing with them (e.g., divesting vCloud Air, partnering with AWS and IBM Softlayer). Part of the VMware Cloud message and strategy is to provide consistent operations and management across clouds, containers, virtual machines (VMs), as well as other software defined data center (SDDC) and software defined data infrastructure environments.

In other words, VMware is providing consistent management to leverage the common experience of data infrastructure staff, along with resources, in hybrid, cross-cloud, and software defined environments, in support of existing as well as cloud native applications.

VMware Cloud on AWS (image via AWS.com)

Note that VMware Cloud services run on top of AWS EC2 bare metal (BM) server instances, as well as on BM instances at IBM Softlayer and OVH. Learn more about AWS EC2 BM compute instances, aka Metal as a Service (MaaS), here. In addition to AWS, IBM, and OVH, VMware claims over 4,000 regional cloud and managed service providers who have built their data infrastructures out using VMware-based technologies.

VMware continues cloud construction updates

Building off of previous announcements, VMware continues cloud construction with enhancements to its Amazon Web Services (AWS) partnership, along with services for the IBM Softlayer cloud as well as OVH. As a refresher, OVH took over what was formerly known as VMware vCloud Air when it was sold off.

Besides expanding on existing cloud partner solution offerings, VMware also announced additional cloud, software defined data center (SDDC), and other software defined data infrastructure environment management capabilities. SDDC and data infrastructure management tools include those leveraging VMware's acquisition of Wavefront, among others.

VMware Cloud Updates and New Features

  • VMware Cloud on AWS European regions (now in London, adding Frankfurt, Germany)
  • Stretched clusters with synchronous replication for cross-geography resiliency
  • Support for data-intensive workloads, including data footprint reduction (DFR) with vSAN-based compression and data deduplication
  • Fujitsu services offering relationships
  • Expanded VMware Cloud Services enhancements

VMware Cloud Services enhancements include:

  • Hybrid Cloud Extension
  • Log intelligence
  • Cost insight
  • Wavefront

VMware Cloud in additional AWS Regions

As part of service expansion, VMware Cloud on AWS has been extended into a European region (London), with plans to expand into Frankfurt and an Asia-Pacific location. Previously, VMware Cloud on AWS was available in the US West (Oregon) and US East (Northern Virginia) regions. Learn more about AWS Regions and Availability Zones (AZs) here.

VMware Cloud Stretch Cluster

VMware Cloud on AWS stretched clusters (source: VMware.com)

In addition to expanding into additional regions, VMware Cloud on AWS is also being extended with stretched clusters for geographically dispersed protection. Stretched clusters provide protection against an AZ failure (e.g., a data center site) for mission-critical applications. Built on vSphere HA and DRS automated host failure technology, stretched clusters provide a recovery point objective of zero (RPO 0) for continuous protection and high availability across AZs at the data infrastructure layer.

The benefit of data infrastructure layer-based HA and resiliency is not having to re-architect or modify upper-level, higher-layered applications or software. Synchronous replication between AZs enables RPO 0; if one AZ goes down, it is treated as a vSphere HA event, with VMs restarted in another AZ.

vSAN-based Data Footprint Reduction (DFR) aka Compression and Deduplication

To support applications that leverage large amounts of data (aka data-intensive applications in marketing speak), VMware is leveraging vSAN-based data footprint reduction (DFR) techniques, including compression as well as deduplication (dedupe). With DFR technologies like compression and dedupe integrated into vSAN, VMware Clouds can store more data in a given cubic density. Storing more data in a given cubic density improves storage efficiency (e.g., space-saving utilization), and along with performance acceleration also facilitates storage effectiveness and productivity.

With VMware vSAN technology as one of the core underlying technologies enabling VMware Cloud on AWS (among other deployments), applications with large data needs can store more data at a lower cost point. Note that VMware Cloud can support 10 clusters per SDDC deployment, with each cluster having 32 nodes, with cluster-wide and cluster-aware dedupe. Also note that for performance, VMware Cloud on AWS leverages NVMe-attached Solid State Devices (SSDs) to boost effectiveness and productivity.
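As a rough back-of-the-envelope illustration (my own hypothetical numbers, not VMware figures), the effective-capacity benefit of DFR can be estimated as raw capacity multiplied by the reduction ratios:

    # Hypothetical effective-capacity estimate; ratios are illustrative only
    raw_tb = 10.0          # usable capacity before DFR
    dedupe_ratio = 2.0     # assumed 2:1 deduplication
    compress_ratio = 1.5   # assumed 1.5:1 compression

    effective_tb = raw_tb * dedupe_ratio * compress_ratio
    print(f"{effective_tb:.1f} TB effective")  # 30.0 TB effective from 10 TB usable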

VMware Hybrid Cloud Extension

Extending VMware vSphere any-to-any migration across clouds (source: VMware.com)


VMware Hybrid Cloud Extension enables common management of common underlying data infrastructure as well as software defined environments, including across public, private, and hybrid clouds. Some of the capabilities include warm VM migration across various software defined environments, from local on-premises and private cloud to public clouds.

New enhancements leverage previously available technology, now as a service for enterprises besides service providers, to support data center to data center, cloud-centric AZ to AZ, as well as region to region migrations. Some of the use cases include small to large bulk migrations of hundreds to thousands of VMs, covering both the scheduling and the actual moves. Moves and migrations can span hybrid deployments with a mix of on-premises as well as various cloud services.

VMware Cloud Cost Insight

VMware Cost Insight enables analysis and comparison of cloud costs (across public AWS, Azure, and private VMware clouds) to avoid flying blind in and among clouds. VMware Cloud Cost Insight enables awareness of how resources are used, their cost, and their benefit to applications as well as IT budget impacts. It integrates the vSAN sizer tool along with AWS metrics for improved situational awareness, cost modeling, analysis, and what-if comparisons.

With integration to Network Insight, VMware Cloud Cost Insight also provides awareness into networking costs in support of migrations. What this means is that using VMware Cloud Cost Insight, you can take the guesswork out of what your expenses will be for public, private on-premises, or hybrid cloud, with deeper insight and awareness into your SDDC environment. Learn more about VMware Cost Insight here.

VMware Log Intelligence

Log Intelligence is a new VMware cloud service that provides real-time data infrastructure insight along with application visibility across private and on-premises environments as well as public and hybrid clouds. As its name implies, Log Intelligence provides syslog and other log insight, analysis, and intelligence, with real-time visibility into VMware as well as AWS (among other) resources for faster troubleshooting, diagnostics, event correlation, and other data infrastructure management tasks.

Log and telemetry input sources for VMware Log Intelligence include data infrastructure resources such as operating systems, servers, system statistics, security, and applications, among other syslog events. For those familiar with VMware Log Insight, this capability extends that familiar experience into a cloud-based service.

Wavefront by VMware (source: VMware.com)

VMware Wavefront

VMware Wavefront enables monitoring of cloud native, high-scale environments with custom metrics and analytics. As a reminder, Wavefront was acquired by VMware to enable deep metrics and analytics for developers, DevOps, data infrastructure operations, as well as SaaS application developers, among others. Wavefront integrates with VMware vRealize along with enabling monitoring of AWS data infrastructure resources and services. With the ability to ingest, process, and analyze various data feeds, the Wavefront engine enables predictive understanding of mixed application, cloud native data, and data infrastructure platforms, including big data-based ones.

Where to learn more

Learn more about VMware, vSphere, vRealize, VMware Cloud, AWS (and other clouds), along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

VMware continues cloud construction. For now, it appears that VMware, like Dell Technologies, is content being a technology provider partner to large as well as small public, private, and hybrid cloud environments instead of building its own and competing. With this series of announcements, VMware continues cloud construction, enabling its partners and customers on their various software defined data center (SDDC) and related data infrastructure journeys. Overall, this is a good set of enhancements, updates, and new and evolving features for partners as well as customers who leverage VMware-based technologies.

Ok, nuff said, for now.

Gs


HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads


By Greg Schulz – www.storageioblog.com – November 20, 2017

HPE announced today a new AMD EPYC 7000 powered Gen 10 ProLiant DL385 for software defined workloads, including server virtualization, software-defined data center (SDDC), software-defined data infrastructure (SDDI), and software-defined storage, among others. These new servers are part of a broader Gen10 HPE portfolio of ProLiant DL systems.

24 Small Form Factor drive front view of the DL385 Gen 10 (via HPE)

The value proposition HPE promotes for these new AMD powered Gen 10 DL385 servers, besides supporting software-defined, SDDI, SDDC, and related workloads, is security, density, and lower price than others. HPE claims that the new AMD EPYC system on a chip (SoC) processor powered Gen 10 DL385 offers up to 50 percent lower cost per virtual machine (VM) than traditional server solutions.

About HPE AMD Powered Gen 10 DL385

HPE AMD EPYC 7000 Gen 10 DL385 features:

  • 2U (height) form factor
  • HPE OneView and iLO management
  • Flexible HPE finance options
  • Data Infrastructure Security
  • AMD EPYC 7000 System on Chip (SoC) processors
  • NVMe storage (embedded M.2 and U.2/8639 Small Form Factor (SFF) drive form factors)
  • Addresses server I/O and memory bottlenecks

These new HPE servers are positioned for:

  • Software Defined, Server Virtualization
  • Virtual Desktop Infrastructure (VDI) workspaces
  • HPC, Cloud and other general high-density workloads
  • General data infrastructure workloads that benefit from memory-centric designs or GPUs

Different AMD Powered DL385 ProLiant Gen 10 Packaging Options

Common across AMD EPYC 7000 powered Gen 10 DL385 servers are the 2U-high form factor, iLO management software and interfaces, flexible LAN on Motherboard (LOM) options, MicroSD (optional dual MicroSD), NVMe (embedded M.2 and SFF U.2) server storage I/O interfaces and drives, health and status LEDs, GPU support, and single or dual socket processors.

HPE DL385 Gen10 inside view (via HPE)

HPE DL385 Gen10 rear view (via HPE)

Other features include up to three storage drive bays, support for Large Form Factor (LFF) and Small Form Factor (SFF) devices (HDDs and SSDs) including SFF NVMe (e.g., U.2) SSDs, up to 4 x GbE NICs, and a PCIe riser for GPUs (an optional second riser requires the second processor). Other features and options include HPE Smart Array (RAID), up to 6 cooling fans, internal and external USB 3, and an optional universal media bay that can add a front display, an optional Optical Disc Drive (ODD), or 2 x U.2 NVMe SFF SSDs. Note the media bay occupies one of the three storage drive bays.

HPE DL385 Gen10 form factor (via HPE)

  • Up to 3 x drive bays
  • Up to 12 LFF drives (2 per bay)
  • Up to 24 SFF drives (3 x 8-drive bays; 6 SFF + 2 NVMe U.2, or 8 x NVMe)

AMD EPYC 7000 Series

The AMD EPYC 7000 series is available in single- and dual-socket configurations. View additional AMD EPYC speeds and feeds in this data sheet (PDF), along with AMD server benchmarks here.

HPE DL385 Gen 10 AMD EPYC specifications (via HPE)

AMD EPYC 7000 General Features

  • Single and dual socket
  • Up to 32 cores, 64 threads per socket
  • Up to 16 DDR4 DIMMs over eight channels per socket (e.g., up to 2TB RAM)
  • Up to 128 PCIe Gen 3 lanes (e.g., combinations of x4, x8, x16, etc.)
  • Future 128GB DIMM support

AMD EPYC 7000 Security Features

  • Secure processor and secure boot for malware rootkit protection
  • System memory encryption (SME)
  • Secure Encrypted Virtualization (SEV) hypervisors and guest virtual machine memory protection
  • Secure move (e.g., encrypted) between enabled servers

Where To Learn More

Learn more about Data Infrastructure and related server technology, trends, tools, techniques, tradecraft and tips with the following links.

  • AMD EPYC 7000 System on Chip (SoC) processors
  • Gen10 HPE portfolio and ProLiant DL systems.
  • Various Data Infrastructure related news commentary, events, tips and articles
  • Data Center and Data Infrastructure industry links
  • Data Infrastructure server storage I/O network Recommended Reading List Book Shelf
  • Software Defined Data Infrastructure Essentials (CRC 2017) Book
What This All Means

With flexible options including HDDs, SSDs, and NVMe-accessible SSDs, large memory capacity, and plenty of compute cores, these new solutions provide good data infrastructure server density (e.g., CPU, memory, I/O, storage) per cubic foot or meter per cost.

I look forward to trying one of these systems out for software-defined scenarios, including virtualization and software-defined storage (SDS), among other workload scenarios. Overall, the HPE announcement of the new AMD EPYC 7000 powered Gen 10 ProLiant DL385 looks to be a good option for many environments.

Ok, nuff said, for now.

Gs


AWS Announces New S3 Cloud Storage Security Encryption Features

Updated 1/17/2018

Amazon Web Services (AWS) recently announced new Simple Storage Service (S3) encryption and security enhancements, including default encryption, permission checks, cross-region replication ACL overwrite, cross-region replication with KMS, and a detailed inventory report. Another recent AWS announcement is PrivateLink endpoints within a Virtual Private Cloud (VPC).

AWS service dashboard

Default Encryption

Extending previous security features, you can now mandate that all objects stored in a given S3 bucket be encrypted, without specifying a bucket policy that rejects non-encrypted objects. There are three server-side encryption (SSE) options for S3 objects: keys managed by S3, AWS KMS managed keys, and customer-managed keys (SSE-C). These options provide more flexibility as well as control for different environments, along with increased granularity. Note that encryption can be forced on all objects in a bucket by specifying a bucket encryption configuration. When an unencrypted object is stored in an encrypted bucket, it inherits the bucket's default encryption unless a different option is specified with the PUT request.
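As a minimal sketch using the boto3 AWS SDK (the bucket name is hypothetical; this example sets S3-managed keys (SSE-S3) as the bucket default):

    import boto3

    s3 = boto3.client("s3")

    # Mandate default server-side encryption for all new objects in the bucket
    s3.put_bucket_encryption(
        Bucket="example-bucket",  # hypothetical bucket name
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
                # For KMS-managed keys, use:
                # {"ApplyServerSideEncryptionByDefault":
                #      {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "<key-arn>"}}
            ]
        },
    )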

AWS S3 buckets

Permission Checks

There is now an indicator on the S3 console dashboard prominently showing which S3 buckets are publicly accessible. In the above image, some of my AWS S3 buckets are shown, including one that is public facing. Note in the image how there is a notation next to buckets that are open to the public.

Cross-Region Replication ACL Overwrite and KMS

AWS Key Management Service (KMS) keys can be used for encrypting objects. Building on previous cross-region replication capabilities, when you replicate objects across AWS accounts, a new ACL providing full access to the destination account can now be specified.

Detailed Inventory Report

The S3 Inventory report (which can also be encrypted) now includes the encryption status of each object.

PrivateLink for AWS Services

PrivateLink enables AWS customers to access services from a VPC without using a public IP, and without traffic having to traverse the internet (i.e., traffic stays within the AWS network). PrivateLink endpoints appear as Elastic Network Interfaces (ENIs) with private IPs in your VPC and are highly available, resilient, and scalable. Besides scaling and resiliency, PrivateLink eliminates the need for whitelisting public IPs as well as managing internet gateway, NAT, and firewall proxies to connect to AWS services (Elastic Cloud Compute (EC2), Elastic Load Balancer (ELB), Kinesis Streams, Service Catalog, EC2 Systems Manager). Learn more about AWS PrivateLink for services here, including VPC endpoint pricing here.
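A minimal sketch of creating an interface endpoint with the boto3 AWS SDK (the VPC ID, subnet ID, and service name below are hypothetical placeholders; substitute your own):

    import boto3

    ec2 = boto3.client("ec2")

    # Create an interface endpoint so traffic to Kinesis Streams stays on the AWS network
    resp = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",                          # hypothetical VPC
        ServiceName="com.amazonaws.us-east-1.kinesis-streams",  # hypothetical region/service
        SubnetIds=["subnet-0123456789abcdef0"],                 # hypothetical subnet
        PrivateDnsEnabled=True,
    )
    print(resp["VpcEndpoint"]["VpcEndpointId"])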

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

    What This All Means

    Common cloud concern considerations include privacy and security. AWS S3 among other industry cloud service and storage providers have had their share of not so pleasant news coverage involving security.

    Keep in mind that data protection including security is a shared responsibility (and only you can prevent data loss). This means that the vendor or service provider has to take care of their responsibility making sure their solutions have proper data protection and security features by default, as well as extensions, and making those capabilities known to consumers.

    The other part of shared responsibility is that consumers and users of cloud services need to know what the capabilities, defaults, and options are, as well as when to use various approaches. Ultimately it is up to the user of a cloud service to implement best practices, leveraging cloud as well as their own on-premises technologies, so that they can support data infrastructures that in turn protect, preserve, secure, and serve information (along with their applications and data).

    These are good enhancements by AWS to make their S3 cloud storage security encryption features available as well as provide options and awareness for users on how to use those capabilities.

     

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Data Infrastructure server storage I/O network Recommended Reading #blogtober

    server storage I/O data infrastructure trends recommended reading list

    Updated 7/30/2018

    The following is an evolving recommended reading list of data infrastructure topics including server, storage I/O, networking, cloud, virtual, container, data protection and related topics that includes books, blogs, podcasts, events and industry links among other resources.

    Various Data Infrastructure including hardware, software, services related links:

    Links A-E
    Links F-J
    Links K-O
    Links P-T
    Links U-Z
    Other Links

    In addition to my own books including Software Defined Data Infrastructure Essentials (CRC Press 2017), the following are Server StorageIO recommended reading list items. The recommended reading list includes various IT, Data Infrastructure and related topics.

    Intel Recommended Reading List (IRRL) for developers is a good resource to check out.

    Duncan Epping (@DuncanYB), Frank Denneman (@FrankDenneman) and Niels Hagoort (@NHagoort) have released their VMware vSphere 6.7 Clustering Deep Dive book, available at venues including Amazon.com. This is the latest in a series of clustering deep dive books from Frank and Duncan; if you are involved with VMware, SDDC and related software defined data infrastructures, it should be on your bookshelf.

    Check out the Blogtober list to see some of the blogs and posts occurring during October 2017 here.

    Preston De Guise (aka @backupbear), author of several books, has an interesting new site Foolsrushin.info that looks at topics including ethics in IT among others. Check out his new book Data Protection: Ensuring Data Availability (CRC Press 2017), available via Amazon.com here.

    Brendan Gregg has a great site for Linux performance related topics here.

    Greg Knieriemen has a must-read weekly blog, post and column collection of what's going on in and around the IT and data infrastructure related industries. Check it out here.

    Interested in file systems, CIFS, SMB, Samba and related topics? Then check out Chris Hertel's book on implementing CIFS here at Amazon.com.

    For those involved with VMware, check out Frank Denneman's VMware vSphere 6.5 host resources deep-dive book here at Amazon.com.

    Docker: Up & Running: Shipping Reliable Containers in Production by Karl Matthias & Sean P. Kane via Amazon.com here.

    Essential Virtual SAN (VSAN): Administrator's Guide to VMware Virtual SAN, 2nd ed. by Cormac Hogan & Duncan Epping via Amazon.com here.

    Hadoop: The Definitive Guide: Storage and Analysis at Internet Scale by Tom White via Amazon.com here.

    Systems Performance: Enterprise and the Cloud by Brendan Gregg Via Amazon.com here.

    Implementing Cloud Storage with OpenStack Swift by Amar Kapadia, Sreedhar Varma, & Kris Rajana Via Amazon.com here.

    The Human Face of Big Data by Rick Smolan & Jennifer Erwitt Via Amazon.com here.

    VMware vSphere 5.1 Clustering Deepdive (Vol. 1) by Duncan Epping & Frank Denneman Via Amazon.com here. Note: This is an older title, but there are still good fundamentals in it.

    Linux Administration: A Beginner's Guide by Wale Soyinka Via Amazon.com here.

    TCP/IP Network Administration by Craig Hunt Via Amazon.com here.

    Cisco IOS Cookbook: Field tested solutions to Cisco Router Problems by Kevin Dooley and Ian Brown Via Amazon.com here.

    I often mention in presentations a must-have for anybody involved with software defined anything, or programming for that matter: the Niklaus Wirth classic Algorithms + Data Structures = Programs, which you can get on Amazon.com here.

    Seven Databases in Seven Weeks including NoSQL

    Another great book to have is Seven Databases in Seven Weeks (here is a book review), which not only provides an overview of popular NoSQL databases such as Cassandra, Mongo and HBase among others, but also lots of good examples and hands-on guides. Get your copy here at Amazon.com.

    Additional Data Infrastructure and related topic sites

    In addition to those mentioned above, other sites, venues and data infrastructure related resources include:

    aiim.com – Archiving and records management trade group

    apache.org – Various open-source software

    blog.scottlowe.org – Scott Lowe VMware Networking and topics

    blogs.msdn.microsoft.com/virtual_pc_guy – Ben Armstrong Hyper-V blog

    brendangregg.com – Linux performance-related topics

    cablemap.info – Global network maps

    CMG.org – Computer Measurement Group (CMG)

    communities.vmware.com – VMware technical community and resources

    comptia.org – Various IT, cloud, and data infrastructure certifications

    cormachogan.com – Cormac Hogan VMware and vSAN related topics

    csrc.nist.gov – U.S. government cloud specifications

    dmtf.org – Distributed Management Task Force (DMTF)

    ethernetalliance.org – Ethernet industry trade group

    fibrechannel.org – Fibre Channel trade group

    github.com – Various open-source solutions and projects

    Intel Reading List – recommended reading list for developers

    ieee.org – Institute of Electrical and Electronics Engineers

    ietf.org – Internet Engineering Task Force

    iso.org – International Organization for Standardization (ISO)

    it.toolbox.com – Various IT and data infrastructure topics forums

    labs.vmware.com/flings – VMware Fling additional tools and software

    nist.gov – National Institute of Standards and Technology

    nvmexpress.org – NVM Express (NVMe) industry trade group

    objectstoragecenter.com – Various object and cloud storage items

    opencompute.org – Open Compute Project (OCP) servers and related topics

    opendatacenteralliance.org – Open Data Center Alliance (ODCA)

    openfabrics.org – Open-fabric software industry group

    opennetworking.org – Open Networking Foundation (ONF)

    openstack.org – OpenStack resources

    pcisig.com – Peripheral Component Interconnect (PCI) trade group

    reddit.com – Various IT, cloud, and data infrastructure topics

    scsita.org – SCSI trade association (SAS and others)

    SNIA.org – Storage Network Industry Association (SNIA)

    Speakingintech.com – Popular industry and data infrastructure podcast

    Storage Bibliography – Collection of Dr. J. Metz storage related content

    technet.microsoft.com – Microsoft TechNet data infrastructure–related topics

    thenvmeplace.com – various NVMe and related tools, topics and links

    thevpad.com – Collection of various virtualization and related sites

    thessdplace.com – various NVM, SSD, flash, 3D XPoint related topics, tools, links

    tpc.org – Transaction Performance Council benchmark site

    vmug.org – VMware User Groups (VMUG)

    wahlnetwork.com – Chris Wahl networking and related topics

    yellow-bricks.com – Duncan Epping VMware and related topics

    Additional Data Infrastructure Venues

    Additional useful data infrastructure related information can be found at BizTechMagazine, BrightTalk, ChannelProNetwork, ChannelproSMB, ComputerWeekly, Computerworld, CRN, CruxialCIO, Data Center Journal (DCJ), Datacenterknowledge, and DZone. Other good sources include Edtechmagazine, Enterprise Storage Forum, EnterpriseTech, Eweek.com, FedTech, Google+, HPCwire, InfoStor, ITKE, LinkedIn, NAB, Network Computing, Networkworld, and nextplatform. Also check out Reddit, Redmond Magazine and Webinars, Spiceworks Forums, StateTech, techcrunch.com, TechPageOne, TechTarget Venues (various Search sites, e.g., SearchStorage, SearchSSD, SearchAWS, and others), theregister.co.uk, TheVarGuy, Tom’s Hardware, and zdnet.com, among many others.

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    The above is an evolving collection of recommended reading, including what I have on my physical and virtual bookshelves, as well as a list of web sites, blogs and podcasts worth listening to, reading or watching. Watch for more items to be added to the bookshelf soon, and if you have a suggested recommendation, add it to the comments below.

    By the way, if you have not heard, it's #Blogtober; check out some of the other blogs and posts occurring during October here as part of your recommended reading list.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    PCIe Fundamentals Server Storage I/O Network Essentials

    Updated 8/31/19

    PCIe Fundamentals Server Storage I/O Network Essentials

    PCIe fundamentals data infrastructure trends

    This piece looks at PCIe fundamentals topics for server, storage and I/O network data infrastructure environments. Peripheral Component Interconnect (PCI) Express, aka PCIe, is a fundamental server, storage and I/O networking component. This post is an excerpt from chapter 4 (Chapter 4: Servers: Physical, Virtual, Cloud, and Containers) of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available via Amazon.com and other global venues. In this post, we look at various PCIe fundamentals to learn, expand or refresh your server, storage, I/O and networking tradecraft skills and experience.

    PCIe fundamentals Server Storage I/O Fundamentals

    PCIe fundamental common server I/O component

    Common to all servers is some form of a main system board, which can range from large boards in supercomputers, data center rack, tower, and micro tower systems, converged or standalone, to small Intel NUC (Next Unit of Computing), MSI and Kepler-47 footprint, or Raspberry Pi-type desktop servers and laptops. Likewise, PCIe is commonly found in storage and networking systems and appliances, among other devices.

    For example, a blade server will have multiple server blades or modules, each with its motherboard, which shares a common back plane for connectivity. Another variation is a large server such as an IBM “Z” mainframe, Cray, or another supercomputer that consists of many specialized boards that function similar to a smaller-sized motherboard on a larger scale.

    Some motherboards also have mezzanine or daughter boards for attachment of additional I/O networking or specialized devices. The following figure shows a generic example of a two-socket, with eight-memory-channel-type server architecture.

    PCIe fundamentals SDDC, SDI, SDDI Server fundamentals
    Generic computer server hardware architecture. Source: Software Defined Data Infrastructure Essentials (CRC Press 2017)

    The above figure shows several PCIe, USB, SAS, SATA, 10 GbE LAN, and other I/O ports. Different servers will have various combinations of processor and Dual Inline Memory Module (DIMM) Dynamic RAM (DRAM) sockets along with other features. What will also vary are the type and number of I/O and storage expansion ports, power and cooling, along with management tools or included software.

    PCIe, Including Mini-PCIe, NVMe, U.2, M.2, and GPU

    At the heart of many servers' I/O and connectivity solutions is the PCIe industry-standard interface (see PCIsig.com). PCIe is used to communicate between CPUs and the outside world of I/O networking devices. The importance of a faster and more efficient PCIe bus is to support more data moving in and out of servers while accessing fast external networks and storage.

    For example, a server with a 40-GbE NIC or adapter needs a PCIe connection capable of 5 GB per second (40 Gb/s ÷ 8 bits per byte). If multiple 40-GbE ports are attached to a server, you can see where the need for faster PCIe interfaces comes into play.

    As more VMs are consolidated onto PMs, and as applications place more performance demand either regarding bandwidth or activity (IOPS, frames, or packets) per second, more 10-GbE adapters will be needed until the price of 40-GbE (also 25, 50 or 100 GbE) becomes affordable. It is not if, but rather when, you will grow into the performance needs on either a bandwidth/throughput basis or to support more activity and lower latency per interface.
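
    As a rough sizing sketch (assuming PCIe Gen 3 lanes at roughly 0.985 GB/s each after 128 b/130 b encoding; real adapters and slots will vary), the following Python snippet estimates the smallest standard lane count that can carry a given Ethernet port at line rate:

        # Rough sizing sketch: PCIe Gen 3 lanes needed per Ethernet port speed.
        GEN3_LANE_GBS = 8 * (128 / 130) / 8  # ~0.985 GB/s per lane, one direction

        def gen3_lanes_needed(nic_gbits: float) -> int:
            """Smallest standard Gen 3 lane count that carries a NIC at line rate."""
            need_gbytes = nic_gbits / 8  # e.g., 40 GbE -> 5 GB/s
            for lanes in (1, 2, 4, 8, 16):
                if lanes * GEN3_LANE_GBS >= need_gbytes:
                    return lanes
            return 32

        for nic in (10, 25, 40, 50, 100):
            print(f"{nic} GbE needs ~{nic / 8:.2f} GB/s -> PCIe Gen 3 x{gen3_lanes_needed(nic)}")

    Running this shows, for example, that a single 40-GbE port wants a Gen 3 x8 connection, since x4 tops out just under 4 GB/s.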

    PCIe is a serial interface specified for how servers communicate between CPUs, memory, and motherboard-mounted as well as AiC devices. This communication includes support attachment of onboard and host bus adapter (HBA) server storage I/O networking devices such as Ethernet, Fibre Channel, InfiniBand, RapidIO, NVMe (cards, drives, and fabrics), SAS, and SATA, among other interfaces.

    In addition to supporting attachment of traditional LAN, SAN, MAN, and WAN devices, PCIe is also used for attaching GPU and video cards to servers. Traditionally, PCIe has been focused on being used inside of a given server chassis. Today, however, PCIe is being deployed on servers spanning nodes in dual, quad, or CiB, CI, and HCI or Software Defined Storage (SDS) deployments. Another variation of PCIe today is that multiple servers in the same rack or proximity can attach to shared devices such as storage via PCIe switches.

    PCIe components (hardware and software) include:

    • Hardware chipsets, cabling, connectors, endpoints, and adapters
    • Root complex and switches, risers, extenders, retimers, and repeaters
    • Software drivers, BIOS, and management tools
    • HBAs, RAID, SSD, drives, GPU, and other AiC devices
    • Mezzanine, mini-PCIe, M.2, NVMe U.2 (8639 drive form factor)

    There are many different implementations of PCIe, corresponding to generations representing speed improvements as well as physical packaging options. PCIe can be deployed in various topologies, including a traditional model where an AiC such as a GbE or Fibre Channel HBA connects the server to a network or storage device.

    Another variation is for a server to connect to a PCIe switch, or in a shared PCIe configuration between two or more servers. In addition to different generations and topologies, there are also various PCIe form factors and physical connectors (see the following figure), ranging from AiC of various length and height, as well as M.2 small-form-factor devices and U.2 (8639) drive form-factor device for NVMe, among others.

    Note that the presence of M.2 does not guarantee PCIe NVMe, as it also supports SATA.

    Likewise, different NVMe devices run at various PCIe speeds based on the number of lanes. For example, in the following figure, the U.2 (8639) device (looks like a SAS device) shown is a PCIe x4.

    SDDC, SDI, SDDI PCIe NVMe U.2 8639 drive fundamentals
    PCIe devices NVMe U.2, M.2, and NVMe AiC. (Source: StorageIO Labs.)

    PCIe leverages multiple serial unidirectional point-to-point links, known as lanes, compared to traditional PCI, which used a parallel bus design. PCIe interfaces can have one (x1), four (x4), eight (x8), sixteen (x16), or thirty-two (x32) lanes for data movement. Those PCIe lanes can be full-duplex, meaning data is sent and received at the same time, providing improved effective performance.

    PCIe cards are upward-compatible, meaning that an x4 can work in an x8 slot, an x8 in an x16, and so forth. Note, however, that cards will not perform any faster than their specified speed; an x4 card in an x8 slot will still only run at x4. PCIe cards can also have single, dual, or multiple external ports and interfaces. Also, note that there are still some motherboards with legacy PCI slots that are not interoperable with PCIe cards and vice versa.

    Note that PCIe cards and slots can be mechanically x1, x4, x8, x16, or x32, yet electrically (or signal) wired to a slower speed, based on the type and capabilities of the processor sockets and corresponding chipsets being used. For example, you can have a PCIe x16 slot (mechanical) that is wired for x8, which means it will only run at x8 speed.

    In addition to the differences between electrical and mechanical slots, also pay attention to what generation the PCIe slots are, such as Gen 2, Gen 3 or higher. Also, some motherboards or servers will advertise multiple PCIe slots, but some of those are only active when a second or additional processor socket is occupied by a CPU. For example, a PCIe card that has dual x4 external PCIe ports requiring full PCIe bandwidth will need at least an x8 electrical attachment in the server slot. In other words, for full performance, the aggregate speed of a card's external ports needs to be matched by the slot's electrical (not just mechanical) lane count, and vice versa.

    Recall big "B" as in Bytes vs. little "b" as in bits; for example, a PCIe Gen 3 x4 electrical connection could provide up to about 4 GB/s bandwidth in one direction (your mileage and performance will vary), which translates to 8 × 4 or 32 Gbits/s. In the following table, there is a mix of big "B" Bytes per second and small "b" bits per second.

    Each generation of PCIe has improved on the previous one by increasing the effective speed of the links. Some of the speed improvements have come from faster clock rates while implementing lower overhead encoding (e.g., from 8 b/10 b to 128 b/130 b).

    For example, the PCIe Gen 3 raw bit or line rate is 8 GT/s, or 8 Gbps, or about 1 GB/s per lane in each direction (nearly 2 GB/s full duplex), using a 128 b/130 b encoding scheme that is very efficient compared to PCIe Gen 2 or Gen 1, which used an 8 b/10 b encoding scheme. With 8 b/10 b there is a 20% overhead, vs. a 1.5% overhead with 128 b/130 b (i.e., of 130 bits sent, 128 bits contain data and 2 bits are for overhead).

    |                             | PCIe Gen 1 | PCIe Gen 2 | PCIe Gen 3  | PCIe Gen 4  | PCIe Gen 5  |
    |-----------------------------|------------|------------|-------------|-------------|-------------|
    | Raw bit rate                | 2.5 GT/s   | 5 GT/s     | 8 GT/s      | 16 GT/s     | 32 GT/s     |
    | Encoding                    | 8 b/10 b   | 8 b/10 b   | 128 b/130 b | 128 b/130 b | 128 b/130 b |
    | x1 Lane bandwidth           | 2 Gb/s     | 4 Gb/s     | 8 Gb/s      | 16 Gb/s     | 32 Gb/s     |
    | x1 Single lane (one-way)    | ~250 MB/s  | ~500 MB/s  | ~1 GB/s     | ~2 GB/s     | ~4 GB/s     |
    | x16 Full duplex (both ways) | ~8 GB/s    | ~16 GB/s   | ~32 GB/s    | ~64 GB/s    | ~128 GB/s   |

    Above Table: PCIe Generation and Sample Lane Comparison

    Note that PCIe Gen 3 is the current generally available and shipping technology, with PCIe Gen 4 appearing in the not-so-distant future and PCIe Gen 5 waiting in the wings a few more years down the road.
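
    As a quick sanity check on the table above, here is a small Python sketch that derives the approximate effective per-lane and x16 full-duplex bandwidth from each generation's raw transfer rate and encoding efficiency:

        # Effective bandwidth = raw GT/s x encoding efficiency, divided by 8 bits/Byte.
        GENS = {
            "Gen 1": (2.5, 8 / 10),
            "Gen 2": (5.0, 8 / 10),
            "Gen 3": (8.0, 128 / 130),
            "Gen 4": (16.0, 128 / 130),
            "Gen 5": (32.0, 128 / 130),
        }

        for gen, (gts, efficiency) in GENS.items():
            lane_gbs = gts * efficiency / 8   # one lane, one direction, GB/s
            x16_duplex = lane_gbs * 16 * 2    # sixteen lanes, both directions
            print(f"{gen}: x1 ~{lane_gbs:.2f} GB/s one-way, "
                  f"x16 ~{x16_duplex:.0f} GB/s full duplex")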

    By contrast, older generations of Fibre Channel and Ethernet also used 8 b/10 b, having switched over to 64 b/66 b encoding with 10 Gb and higher. PCIe, like other serial interfaces and protocols, can support full-duplex mode, meaning that data can be sent and received concurrently.

    PCIe Bit Rate, Encoding, Giga Transfers, and Bandwidth

    Let's clarify something about data transfer or movement both internal and external to a server. At the core of a server, there is data movement within the sockets of the processors and their cores, as well as between memory and other devices (internal and external). For example, the QPI bus is used for moving data between some Intel processors, and its performance is specified in gigatransfers per second (GT/s).

    PCIe is used for moving data between processors, memory, and other devices, including internal and external facing devices. Devices include host bus adapters (HBAs), host channel adapters (HCAs), converged network adapters (CNAs), network interface cards (NICs), RAID cards, and others. PCIe performance is specified in multiple ways, given its server processor focus: GT/s for the raw bit rate, as well as effective bandwidth per lane.

    Keep PCIe mechanical and electrical lanes in perspective: a card or slot may be advertised as, say, x8 mechanical (i.e., its physical slot form factor) yet only be x4 electrical (how many of those lanes are actually wired or enabled). Also, in the case of an adapter that has two or more ports, if the device is advertised as x8, determine whether that means x8 per port, or x4 per port sharing an x8 connection to the PCIe bus.

    Effective bandwidth per lane can be specified as half- or full-duplex (data moving in one or both directions for send and receive). Also, effective bandwidth can be specified as a single lane (x1), four lanes (x4), eight lanes (x8), sixteen lanes (x16), or 32 lanes (x32), as shown in the above table. The difference between the raw bit or line rate and the effective bandwidth per lane in a single direction (i.e., half-duplex) is the encoding overhead that is common to all serial data transmissions.

    When data gets transmitted, the serializer/deserializer (SerDes) converts the bytes into a bit stream via encoding. There are different types of encoding, ranging from 8 b/10 b to 64 b/66 b and 128 b/130 b, shown in the following table.

    In the table below, columns 3–5 cover a single 1542-byte frame and columns 6–8 cover 64 × 1542-byte frames.

    | Encoding Scheme | Overhead | Data Bits | Encoding Bits | Bits Transmitted | Data Bits | Encoding Bits | Bits Transferred |
    |-----------------|----------|-----------|---------------|------------------|-----------|---------------|------------------|
    | 8 b/10 b        | 20%      | 12,336    | 3,084         | 15,420           | 789,504   | 197,376       | 986,880          |
    | 64 b/66 b       | 3%       | 12,336    | 386           | 12,738           | 789,504   | 24,672        | 814,176          |
    | 128 b/130 b     | 1.5%     | 12,336    | 194           | 12,610           | 789,504   | 12,336        | 801,840          |

    Above Table: Low-Level Serial Encoding Data Transmit Efficiency

    In these encoding schemes, the smaller number represents the amount of data being sent, and the difference is the overhead. Note that this is different yet related to what occurs at a higher level with the various network protocols such as TCP/IP (IP). With IP, there is a data payload plus addressing and other integrity and management features in a given packet or frame.

    The 8-b/10-b, 64-b/66-b or 128-b/130-b encoding is at the lower physical layer. Thus, a small change there has a big impact and benefit when optimized. The above table (Table 4.2 in the book) shows comparisons of various encoding schemes using the example of moving a single 1542-byte packet or frame, as well as sending (or receiving) 64 packets or frames that are 1542 bytes in size.

    Why 1542? That is a standard full-size Ethernet frame carrying an IP packet, including data and protocol framing, without using jumbo frames (i.e., a larger maximum transmission unit, or MTU).
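
    To show where the table's numbers come from, here is a minimal Python sketch that computes the encoding bits and total bits on the wire for a single 1542-byte frame under each scheme (partial blocks are padded out to a full block, which is why the larger block codes transmit slightly more than data plus encoding bits):

        import math

        def encode(data_bits: int, data_block: int, coded_block: int):
            """Return (encoding bits, total bits on the wire) for a block code.
            Partial blocks are padded out to a full coded block."""
            blocks = math.ceil(data_bits / data_block)
            return blocks * (coded_block - data_block), blocks * coded_block

        frame_bits = 1542 * 8  # 12,336 data bits in one 1542-byte frame
        for name, d, c in (("8b/10b", 8, 10), ("64b/66b", 64, 66), ("128b/130b", 128, 130)):
            enc, total = encode(frame_bits, d, c)
            print(f"{name}: {enc:,} encoding bits, {total:,} bits transmitted")

    This reproduces the single-frame columns of the table: 15,420 bits on the wire for 8 b/10 b vs. 12,610 for 128 b/130 b.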

    What does this have to do with PCIe? GbE, 10-GbE, 40-GbE, and other physical interfaces that are used for moving TCP/IP packets and frames interface with servers via PCIe.

    This encoding is important as part of server storage I/O tradecraft regarding understanding the impact of performance and network or resource usage. It also means understanding why there are fewer bits per second of effective bandwidth (independent of compression or deduplication) vs. line rate in either half- or full-duplex mode.

    Another item to note is that looking at encoding, as in the example given in the above table, shows how a relatively small change at a large scale can have a big effective impact and benefit. If the bits-and-bytes encoding efficiency scenario above does not make sense, then try imagining 13 MINI Cooper automobiles, each with eight people in it (yes, that would be a tight fit), end to end on the same road.

    Now imagine a large bus that takes up much less length on the road than the 13 MINI Coopers. The bus holds 128 people, who would still be crowded but nowhere near as cramped as eight people in a MINI, plus 24 additional people can be carried on the bus. That is an example of applying basic 8-b/10-b encoding (the MINI) vs. applying 128-b/130-b encoding (the bus) and is also similar to PCIe G3 and G4, which use 128-b/130-b encoding for data movement.

    PCIe Topologies

    The basic PCIe topology configuration has one or more devices attached to the root complex shown in the following figure via an AiC or onboard device connector. Examples of AiC and motherboard-mounted devices that attach to PCIe root include LAN or SAN HBA, networking, RAID, GPU, NVM or SSD, among others. At system start-up, the server initializes the PCIe bus and enumerates the devices found with their addresses.

    PCIe devices attach (shown in the following figure) to a bus that communicates with the root complex, which connects with processor CPUs and memory. At the other end of a PCIe connection is an end-point target, or a PCIe switch that in turn has end-point targets attached. From a software standpoint, hypervisor or operating system device drivers communicate with the PCIe devices, which in turn send or receive data or perform other functions.

    SDDC, SDI, SDDI PCIe fundamentals
    Basic PCIe root complex with a PCIe switch or expander.

    Note that in addition to PCIe AiC such as HBAs, GPUs, and NVM SSDs, among others that install into PCIe slots, servers also have converged storage or disk drive enclosures that support a mix of SAS, SATA, and PCIe. These enclosure backplanes have a connector that attaches to a SAS or SATA onboard port, or a RAID card, as well as to a PCIe riser card or motherboard connector. Depending on what type of drive is installed in the connector, either the SAS, SATA, or NVMe (AiC, U.2, and M.2, using PCIe) communication path is used.

    In addition to traditional and switched PCIe, using PCIe switches as well as nontransparent bridging (NTB), various other configurations can be deployed. These include server-to-server connections for clustering, failover, or device sharing, as well as fabrics. Note that this also means that while traditionally found inside a server, PCIe can today be extended across servers within a rack or cabinet using extenders, retimers, and repeaters.

    A nontransparent bridge (NTB) is a point-to-point connection between two PCIe-based systems that provides electrical isolation yet functions as a transport bridge between two different address domains. Hosts on either side of the NTB see their respective memory or I/O address space. The NTB presents an endpoint exposed to the local system where writes are mirrored to memory on the remote system, allowing the systems to communicate and share devices using associated device drivers. For example, in the following figure, two servers, each with a unique PCIe root complex, address, and memory map, are shown using NTB to enable communication between the systems while maintaining data integrity.

    SDDC, SDI, SDDI PCIe two server fundamentals
    PCIe dual server example using NTB along with switches.

    General PCIe considerations (slots and devices) include:

    • Power consumption (and heat dissipation)
    • Physical and software plug-and-play (good interoperability)
    • Drivers (in-the-box, built into the OS, or add-in)
    • BIOS, UEFI, and firmware being current versions
    • Power draw per card or adapters
    • Type of processor, socket, and support chip (if not an onboard processor)
    • Electrical signal (lanes) and mechanical form factor per slot
    • Nontransparent bridge and root port (RP)
    • PCI multi-root (MR), single-root (SR), and hot plug
    • PCIe expansion chassis (internal or external)
    • External PCIe shared storage

    Various operating system and hypervisor commands are available for viewing and managing PCIe devices. For example, on Linux, the "lspci" and "lshw -c pci" commands display PCIe devices and associated information. On a VMware ESXi host, the "esxcli hardware pci list" command will show various PCIe devices and information, while on Microsoft Windows systems, "device manager" (GUI) or "devcon" (command line) will show similar information.
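
    On Linux, the negotiated link speed and width can also be read directly from sysfs. The following minimal Python sketch (assuming a reasonably recent kernel; not every device exposes these attributes) reports current vs. maximum link per PCIe device, which is handy for spotting a card running at fewer electrical lanes than its mechanical slot suggests:

        from pathlib import Path

        # Report current vs. maximum PCIe link speed/width per device via sysfs.
        for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
            try:
                speed = (dev / "current_link_speed").read_text().strip()
                width = (dev / "current_link_width").read_text().strip()
                max_speed = (dev / "max_link_speed").read_text().strip()
                max_width = (dev / "max_link_width").read_text().strip()
            except (FileNotFoundError, OSError):
                continue  # not all devices expose link attributes
            print(f"{dev.name}: x{width} @ {speed} (max x{max_width} @ {max_speed})")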

    Who Are Some PCIe Fundamentals Vendors and Service Providers

    While not an exhaustive list, here is a sampling of vendors and service providers involved in various ways with PCIe, from solutions to components to services to trade groups: Amphenol (connectors and cables), AWS (cloud data infrastructure services), Broadcom (PCIe components), Cisco (servers), DataOn (servers), Dell EMC (servers, storage, software), E8 (storage software), Excelero (storage software), HPE (storage, servers), Huawei (storage, servers), IBM, Intel (storage, servers, adapters), and Keysight (test equipment and tools).

    Others include Lenovo (servers), Liqid (composable data infrastructure), Mellanox (server and storage adapters), Micron (storage devices), Microsemi (PCIe components), Microsoft (cloud and software including S2D), Molex (connectors, cables), NetApp, NVMexpress.org (NVM Express trade group organization), Open Compute Project (server, storage, I/O network industry group), Oracle, PCI-SIG (PCIe industry trade group), Samsung (storage devices), ScaleMP (composable data infrastructure), Seagate (storage devices), SNIA (industry trade group), Supermicro (servers), Tidal (composable data infrastructure), Hitachi Vantara (formerly known as HDS), VMware (software including vSAN), and WD, among others.

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    PCIe fundamentals are resources for building legacy and software-defined data infrastructures (SDDI), software-defined infrastructures (SDI), data centers and other deployments, from laptops to large-scale, hyper-scale cloud service providers. Learn more about Servers: Physical, Virtual, Cloud, and Containers in chapter 4 of my new book Software Defined Data Infrastructure Essentials (CRC Press 2017), available via Amazon.com and other global venues. Meanwhile, PCIe continues to evolve as a fundamental server, storage, and I/O networking component.

    Ok, nuff said, for now.
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Cloud Conversations AWS Azure Service Maps via Microsoft

    Cloud Conversations AWS Azure Service Maps via Microsoft

    server storage I/O data infrastructure trends

    Updated 1/21/2018

    Microsoft has created an Amazon Web Services (AWS) Azure Service Map. The AWS Azure Service Map is a list created by Microsoft that maps corresponding services of the two cloud providers.

    Azure AWS service map via Microsoft.com
    Image via Azure.Microsoft.com

    Note that this is an evolving work in progress from Microsoft; use it as a tool to help position the different services from Azure and AWS.

    Also note that not all features or services are available in all regions; visit the Azure and AWS sites to see current availability.

    As with any comparison, such maps are often dated the day they are posted, hence this is a work in progress. If you are looking for another Microsoft-created piece on why Azure vs. AWS, then check out this here. If you are looking for an AWS vs. Azure comparison, do a simple Google (or Bing) search and watch all the various items appear, some sponsored, some not so sponsored, among others.

    Whats In the Service Map

    The following AWS and Azure services are mapped:

    • Marketplace (e.g. where you select service offerings)
    • Compute (Virtual Machines instances, Containers, Virtual Private Servers, Serverless Microservices and Management)
    • Storage (Primary, Secondary, Archive, Premium SSD and HDD, Block, File, Object/Blobs, Tables, Queues, Import/Export, Bulk transfer, Backup, Data Protection, Disaster Recovery, Gateways)
    • Network & Content Delivery (Virtual networking, virtual private networks and virtual private cloud, domain name services (DNS), content delivery network (CDN), load balancing, direct connect, edge, alerts)
    • Database (Relational, SQL and NoSQL document and key value, caching, database migration)
    • Analytics and Big Data (data warehouse, data lake, data processing, real-time and batch, data orchestration, data platforms, analytics)
    • Intelligence and IoT (IoT hub and gateways, speech recognition, visualization, search, machine learning, AI)
    • Management and Monitoring (management, monitoring, advisor, DevOps)
    • Mobile Services (management, monitoring, administration)
    • Security, Identity and Access (security, directory services, compliance, authorization, authentication, encryption, firewall)
    • Developer Tools (workflow, messaging, email, API management, media transcoding, development tools, testing, DevOps)
    • Enterprise Integration (application integration, content management)

    Download a PDF version of the service map from Microsoft here.

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    On one hand, this can and will likely be used as a comparison; however, use caution, as both Azure and AWS services are rapidly evolving, adding new features and extending others. Likewise, the service regions and sites of data centers also continue to evolve. Thus, use the above as a general guide or tool to help map which service offerings are similar between AWS and Azure.

    By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Announcing Software Defined Data Infrastructure Essentials Book by Greg Schulz

    New SDDI Essentials Book by Greg Schulz of Server StorageIO

    Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft

    server storage I/O data infrastructure trends

    Update 1/21/2018
    Over the past several months I have been posting, commenting, presenting and discussing more about Software Defined Data Infrastructure Essentials, aka SDDI or SDDC and SDI. Now it is time to announce my new book (my 4th solo project), Software Defined Data Infrastructure Essentials (CRC Press). Software Defined Data Infrastructure Essentials is now generally available at various global venues in hardback print as well as various electronic versions, including via Amazon and CRC Press among others. For those attending VMworld 2017 in Las Vegas, I will be doing a book signing, meet and greet at 1PM Tuesday August 29 in the VMworld book store, as well as presenting at various other fall industry events.

    Software Defined Data Infrastructure Essentials Book Announcement

    (Via Businesswire) Stillwater, Minnesota – August 23, 2017  – Server StorageIO, a leading independent IT industry advisory and consultancy firm, in conjunction with publisher CRC Press, a Taylor and Francis imprint, announced the release and general availability of “Software-Defined Data Infrastructure Essentials,” a new book by Greg Schulz, noted author and Server StorageIO founder.

    Software Defined Data Infrastructure Essentials

    The Software Defined Data Infrastructure Essentials book covers physical, cloud, converged (and hyper-converged), container, and virtual server storage I/O networking technologies, revealing trends, tools, techniques, and tradecraft skills.

    Data Infrastructures Protect Preserve Secure and Serve Information
    Various IT and Cloud Infrastructure Layers including Data Infrastructures

    From cloud web scale to enterprise and small environments, IoT to database, software-defined data center (SDDC) to converged and container servers, flash solid state devices (SSD) to storage and I/O networking, the book helps develop or refine hardware, software, services and management experiences, providing real-world examples for those involved with or looking to expand their data infrastructure education, knowledge and tradecraft skills.

    Software Defined Data Infrastructure Essentials book topics include:

      • Cloud, Converged, Container, and Virtual Server Storage I/O networking
      • Data protection (archive, availability, backup, BC/DR, snapshot, security)
      • Block, file, object, structured, unstructured and data value
      • Analytics, monitoring, reporting, and management metrics
      • Industry trends, tools, techniques, decision making
      • Local, remote server, storage and network I/O troubleshooting
      • Performance, availability, capacity and  economics (PACE)

    Where To Purchase Your Copy

    Order via Amazon.com and CRC Press along with Google Books among other global venues.

    What People Are Saying About Software Defined Data Infrastructure Essentials Book

    “From CIOs to operations, sales to engineering, this book is a comprehensive reference, a must-read for IT infrastructure professionals, beginners to seasoned experts,” said Tom Becchetti, advisory systems engineer.

    “We had a front row seat watching Greg present live in our education workshop seminar sessions for ITC professionals in the Netherlands material that is in this book. We recommend this amazing book to expand your converged and data infrastructure knowledge from beginners to industry veterans.”

    Gert and Frank Brouwer – Brouwer Storage Consultancy

    “Software-Defined Data Infrastructures provides the foundational building blocks to improve your craft in several areas including applications, clouds, legacy, and more.  IT professionals, as well as sales professionals and support personal, stand to gain a great deal by reading this book.”

    Mark McSherry- Oracle Regional Sales Manager

    “Greg Schulz has provided a complete ‘toolkit’ for storage management along with the background and framework for the storage or data infrastructure professional (or those aspiring to become one).”
    Greg Brunton – Experienced Storage and Data Management Professional

    “Software-defined data infrastructures are where hardware, software, server, storage, I/O networking and related services converge inside data centers or clouds to protect, preserve, secure and serve applications and data,” said Schulz.  “Both readers who are new to data infrastructures and seasoned pros will find this indispensable for gaining and expanding their knowledge.”

    SDDI and SDDC components

    More About Software Defined Data Infrastructure Essentials
    Software Defined Data Infrastructures (SDDI) Essentials provides fundamental coverage of physical, cloud, converged, and virtual server storage I/O networking technologies, trends, tools, techniques, and tradecraft skills. From webscale, software-defined, containers, database, key-value store, cloud, and enterprise to small or medium-size business, the book is filled with techniques, and tips to help develop or refine your server storage I/O hardware, software, Software Defined Data Centers (SDDC), Software Data Infrastructures (SDI) or Software Defined Anything (SDx) and services skills. Whether you are new to data infrastructures or a seasoned pro, you will find this comprehensive reference indispensable for gaining as well as expanding experience with technologies, tools, techniques, and trends.

    Software Defined Data Infrastructure Essentials SDDI SDDC content

    This book is the definitive source providing comprehensive coverage about IT and cloud Data Infrastructures for experienced industry experts to beginners. Coverage of topics spans from higher level applications down to components (hardware, software, networks, and services) that get defined to create data infrastructures that support business, web, and other information services. This includes Servers, Storage, I/O Networks, Hardware, Software, Management Tools, Physical, Software Defined Virtual, Cloud, Docker, Containers (Docker and others) as well as Bulk, Block, File, Object, Cloud, Virtual and software defined storage.

    Additional topics include Data protection (Availability, Archiving, Resiliency, HA, BC, BR, DR, Backup), Performance and Capacity Planning, Converged Infrastructure (CI), Hyper-Converged, NVM and NVMe Flash SSD, Storage Class Memory (SCM), NVMe over Fabrics, Benchmarking (including metrics that matter, along with tools), and much more, including who's doing what, how things work, and what to use when, where and why, along with current and emerging trends.

    Book Features

    ISBN-13: 978-1498738156
    ISBN-10: 149873815X
    Hardcover: 672 pages
    (Available in Kindle and other electronic formats)
    Over 200 illustrations and 70 plus tables
    Frequently asked Questions (and answers) along with many tips
    Various learning exercises, extensive glossary and appendices
    Publisher: Auerbach/CRC Press Publications; 1 edition (June 19, 2017)
    Language: English

    SDDI and SDDC toolbox

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    Data Infrastructures Protect Preserve Secure and Serve Information
    Various IT and Cloud Infrastructure Layers including Data Infrastructures

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. With more data being created at a faster rate, along with the size of data becoming larger, increased application functionality to transform data into information means more demands on data infrastructures and their underlying resources.

    Software-Defined Data Infrastructure Essentials: Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft is for people who are currently involved with or looking to expand their knowledge and tradecraft skills (experience) of data infrastructures. Software-defined data centers (SDDC), software data infrastructures (SDI), software-defined data infrastructure (SDDI) and traditional data infrastructures are made up of software, hardware, services, and best practices and tools spanning servers, I/O networking, and storage from physical to software-defined virtual, container, and clouds. The role of data infrastructures is to enable and support information technology (IT) and organizational information applications.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    Everything is not the same in business, organizations, IT, and in particular servers, storage, and I/O. This means that there are different audiences who will benefit from reading this book. Because everything and everybody is not the same when it comes to server and storage I/O along with associated IT environments and applications, different readers may want to focus on various sections or chapters of this book.

    If you are looking to expand your knowledge into an adjacent area or to understand what's under the hood, from converged and hyper-converged to traditional data infrastructures topics, this book is for you. For experienced storage, server, and networking professionals, this book connects the dots as well as provides coverage of virtualization, cloud, and other convergence themes and topics.

    This book is also for those who are new or need to learn more about data infrastructure, server, storage, I/O networking, hardware, software, and services. Another audience for this book is experienced IT professionals who are now responsible for or working with data infrastructure components, technologies, tools, and techniques.

    Learn more here about Software Defined Data Infrastructure (SDDI) Essentials book along with cloud, converged, and virtual fundamental server storage I/O tradecraft topics, order your copy from Amazon.com or CRC Press here, and thank you in advance for learning more about SDDI and related topics.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Chelsio Storage over IP and other Networks Enable Data Infrastructures

    Chelsio Storage over IP Enable Data Infrastructures

    server storage I/O data infrastructure trends

    Chelsio and Storage over IP (SoIP) continue to enable data infrastructures from legacy to software defined virtual, container, cloud as well as converged. This past week I had a chance to visit with Chelsio to discuss data infrastructures, server storage I/O networking, along with other related topics. More on Chelsio later in this post; for now, let's take a quick step back and refresh what SoIP (Storage over IP) is, along with storage over Ethernet (among other networks).

    Data Infrastructures Protect Preserve Secure and Serve Information
    Various IT and Cloud Infrastructure Layers including Data Infrastructures

    Server Storage over IP Revisited

    There are many variations of SoIP, from network attached storage (NAS) file-based access including NFS and Samba/SMB (aka Windows file sharing) among others, to block access such as SCSI over IP (e.g. iSCSI), along with object access via HTTP/HTTPS, not to mention the buzzword bingo list of RoCE, iSER, iWARP, RDMA, DPDK, FTP, FCoE, iFCP, and SMB3 direct, to name a few.

    Who is Chelsio

    For those who are not aware or need a refresher, Chelsio is involved with enabling server storage I/O by creating ASICs (Application Specific Integrated Circuits) that perform various functions, offloading them from the host server processor. What this means for some is a throwback to the early-2000s TCP Offload Engine (TOE) era, where processing of regular network traffic, along with iSCSI and other storage over Ethernet and IP, could be accelerated.

    Chelsio data infrastructure focus

    Chelsio ecosystem across different data infrastructure focus areas and application workloads

    As seen in the image above, certainly there is a server and storage I/O network play with Chelsio, along with traffic management, packet inspection, security (encryption, SSL and other offload), traditional, commercial, web, high performance compute (HPC) along with high profit or productivity compute (the other HPC). Chelsio also enables data infrastructures that are part of physical bare metal (BM), software defined virtual, container, cloud, serverless among others.

    Chelsio server storage I/O focus

    The above image shows how Chelsio enables initiators on server and storage appliances as well as targets via various storage over IP (or Ethernet) protocols.

    Chelsio enabling various data center resources

    Chelsio also plays in several different sectors from *NIX to Windows, Cloud to Containers, Various processor architectures and hypervisors.

    Chelsio ecosystem

    Besides diverse server storage I/O enabling capabilities across various data infrastructure environments, what caught my eye with Chelsio is how far they, and storage over IP have progressed over the past decade (or more). Granted there are faster underlying networks today, however the offload and specialized chip sets (e.g. ASICs) have also progressed as seen in the above and next series of images via Chelsio.

    The above shows TCP and UDP acceleration; the following shows Microsoft SMB 3.1.1 performance, something important for doing Storage Spaces Direct (S2D) and Windows-based Converged Infrastructure (CI) along with Hyper-Converged Infrastructure (HCI) deployments.

    Chelsio software environments

    Something else that caught my eye was iSCSI performance, which in the following shows four initiators accessing a single target doing about 4 million IOPS (reads and writes) across various sizes and configurations. Granted, that is with a 100 Gb network interface; however, it also shows that potential bottlenecks are removed, enabling that faster network to be used more effectively.

    Chelsio server storage I/O performance

    Moving on from TCP, UDP and iSCSI: NVMe, and in particular NVMe over Fabrics (NVMe-oF), have become popular industry topics, so check out the following. One of my comments to Chelsio is to add host or server CPU usage to the following chart to help show the story and value proposition of NVMe in general: doing more I/O activity while consuming fewer server-side resources. Let's see what they put out in the future.

    Chelsio

    Ok, so Chelsio does storage over IP, storage over Ethernet and other interfaces, accelerating performance as well as regular TCP and UDP activity. One of the other benefits of what Chelsio and others are doing with their ASICs (or FPGAs for some) is to also offload processing for security among other tasks. Given the increased focus on server storage I/O and data infrastructure security, from encryption to SSL and related usage that requires more resources, these new ASICs such as those from Chelsio help offload various specialized processing from the server.

    The customer benefit is that more productive application work can be done by their servers (or storage appliances). For example, if you have a database server, that means more productive database transactions per second per licensed instance of software. Put another way: want to get more value out of your Oracle, Microsoft or other vendors' software licenses? Simple: get more work done per licensed server by offloading and eliminating waits or other bottlenecks.

    Using offloads and removing server bottlenecks might seem like common sense; however, I'm still amazed at the number of organizations that are more focused on getting extra value out of their hardware vs. getting value out of their software licenses (which might be more expensive).

    Chelsio

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    Data Infrastructures Protect Preserve Secure and Serve Information
    Various IT and Cloud Infrastructure Layers including Data Infrastructures

    What This All Means

    Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. With more data being created at a faster rate, along with the size of data becoming larger, increased application functionality to transform data into information means more demands on data infrastructures and their underlying resources.

    This means more server I/O to storage system and other servers, along with increased use of SoIP as well as storage over Ethernet and other interfaces including NVMe. Chelsio (and others) are addressing the various application and workload demands by enabling more robust, productive, effective and efficient data infrastructures.

    Check out Chelsio and how they are enabling storage over IP (SoIP) to enable data infrastructures from legacy to software defined virtual, container, cloud as well as converged. Oh, and thanks Chelsio for being able to use the above images.

    Ok, nuff said, for now.
    Gs

    Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Data Infrastructure Industry Trends WekaIO Matrix Software Defined Storage SDS

    WekaIO Matrix Scale Out Software Defined Storage SDS

    server storage I/O trends

    Updated 2/11/2018

    WekaIO Matrix is a scale-out software defined storage (SDS) solution.

    WekaIO Matrix software defined scale out storage SDS

    This Server StorageIO Industry Trends Perspective report looks at common issues, trends, and how to address different application server storage I/O challenges. In this report, we look at WekaIO Matrix, an elastic, flexible, highly scalable, easy to use (and manage) software defined (e.g. software-based) storage solution. WekaIO Matrix enables flexible, elastic scaling with stability and without compromise.

Matrix is a new scale out software defined storage solution that:

• Installs on bare metal, virtual, or cloud servers
• Provides POSIX, NFS, SMB, and HDFS storage access
• Adapts performance for little and big data
• Tiers between flash SSD and cloud object storage (see the sketch after this list)
• Delivers distributed resilience without compromise
• Removes the complexity of traditional storage
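As a loose illustration of the tiering bullet above, here is a minimal sketch of a flash-to-object tiering decision. This is a generic, hypothetical policy of my own for illustration, not WekaIO Matrix internals or its actual API; the names and the one-week threshold are assumptions.

```python
import time
from dataclasses import dataclass

# Generic sketch of flash-to-object tiering; hypothetical policy and
# names for illustration, not WekaIO Matrix internals or its API.

DEMOTE_AFTER_SECONDS = 7 * 24 * 3600  # demote after a week idle (assumed)

@dataclass
class FileState:
    path: str
    last_access: float  # epoch seconds
    tier: str = "flash"

def tier_decision(f: FileState, now: float) -> str:
    """Keep hot data on flash SSD; push cold data to cloud object storage."""
    if now - f.last_access > DEMOTE_AFTER_SECONDS:
        return "object"   # cold: candidate for an S3-style object tier
    return "flash"        # hot: keep on the SSD tier

now = time.time()
files = [
    FileState("/data/hot.db", last_access=now - 3600),
    FileState("/data/cold-archive.tar", last_access=now - 30 * 24 * 3600),
]
for f in files:
    f.tier = tier_decision(f, now)
    print(f.path, "->", f.tier)
```

The design point of flash-plus-object tiering is economics: keep the active working set on fast (more expensive) SSD while aging colder data to cheaper, elastic object capacity.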

    Where To Learn More

    View additional SDS and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Read more about WekaIO Matrix in this (free, no registration required) Server StorageIO Industry Trends Perspective (ITP) Report compliments of WekaIO.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.


    Zombie Technology Life after Death Tape Is Still Alive

    server storage I/O data infrastructure trends

A zombie technology is one that has been declared dead yet lives on after death, such as tape, which is still alive.

    zombie technology
    Image via StorageIO.com (licensed for use from Shutterstock.com)

    Tapes Evolving Role

Sure, we have heard for decades about the death of tape, and someday it will be dead and buried (I mean really dead), no longer used, existing only in museums. Granted, tape has been on the decline for some time, and even with many vendors exiting the marketplace, there remains continued development and demand within various data infrastructure environments, including software defined as well as legacy.

    data infrastructures

Tape remains viable for some environments as part of an overall memory and data storage hierarchy, including as a portable (transportable) as well as a bulk storage medium.

Memory and data storage hierarchy classes and tiers

Keep in mind that tape's role as a data storage medium continues to change, as does its location. The following table (via Software Defined Data Infrastructure Essentials (CRC Press), Chapter 10) shows examples of various data movements from source to destination. These movements include migration, replication, cloning, mirroring, backup, and copies, among others. The source device can be a block LUN, volume, partition, physical or virtual drive, HDD or SSD, as well as a file system, object, or blob container or bucket. Examples of the modes in the table include D2D backup from local to local (or remote) disk (HDD or SSD) storage, or D2D2D copy from local to local storage, then to remote.

Mode – Description
D2D – Data gets copied (moved, migrated, replicated, cloned, backed up) from source storage (HDD or SSD) to another disk (HDD or SSD)-based device.
D2C – Data gets copied from a source device to a cloud device.
D2T – Data gets copied from a source device to a tape device (drive or library).
D2D2D – Data gets copied from a source device to another device, and then to another device.
D2D2T – Data gets copied from a source device to another device, then to tape.
D2D2C – Data gets copied from a source device to another device, then to cloud.
Data Movement Modes from Source to Destination

Note that movement from source to target can be a copy, rsync, backup, replicate, snapshot, clone, or robocopy, among many other actions. Also note that in the earlier examples, tape appears in clouds, that is, its place and use are changing. Tip: In the past, “disk” usually referred to HDD. Today, however, it can also mean SSD. Think of D2D as not being just HDD to HDD; it can also be SSD to SSD, Flash to Flash (F2F), or S2S, among many other variations if you prefer (or need).
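To make the modes concrete, here is a minimal sketch (my own illustration, not from the book) that treats each mode as a chain of copy hops from source through optional intermediate devices; the directory names are stand-ins for the different device types.

```python
import shutil
from pathlib import Path

# Illustration of the movement modes as chained copy hops. The "device"
# types (disk, cloud, tape) are just directory labels here; a real data
# mover would use the appropriate transport and medium for each hop.

def copy_chain(source: Path, hops: list[Path]) -> None:
    """Copy source through each hop in order: D2D is one hop, D2D2C is two."""
    current = source
    for destination in hops:
        destination.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(current, destination)
        current = destination  # next hop copies from the previous landing spot

# Create a demo source file standing in for a local backup on disk.
src = Path("disk1/db.bak")
src.parent.mkdir(parents=True, exist_ok=True)
src.write_bytes(b"demo backup payload")

# D2D: disk to disk.
copy_chain(src, [Path("disk2/db.bak")])
# D2D2C: disk to disk, then to a (pretend) cloud-mounted path.
copy_chain(src, [Path("disk2/db.bak"), Path("cloud/db.bak")])
```

The same chaining idea covers D2T and D2D2T simply by making the final hop a tape target instead of a disk or cloud one.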

    Image via Tapestorage.org

For those still interested in tape, check out the Active Archive Alliance recent posts (here), as well as the 2017 Tape Storage Council memo and state of the industry report (here). While lower-end tape such as LTO (which is not exactly the low end it was a decade or so ago) continues to evolve, things may not be as persistent for tape at the high end. With Oracle (via its Sun/StorageTek acquisition) exiting the high-end (e.g., mainframe focused) tape business, that leaves mainly IBM as a technology provider.

    Image via Tapestorage.org

With a single tape device (e.g., drive) vendor at the high end, that could be the signal for many organizations that it is time to finally either move away from tape, or at least to LTO (Linear Tape Open) as a stepping stone (e.g., phased migration). The reason is not technical but business: many organizations need to have a secondary or competitive offering, or must go through an exception process.

On the other hand, even as many exited the IBM mainframe server market (e.g., Fujitsu/Amdahl, HDS, NEC), big blue (e.g., IBM) continues to innovate and drive both revenue and margin from those platforms (hardware, software, and services). This leads me to believe that IBM will do what it can to keep its high-end tape customers supported while also providing alternative options.

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    What This All Means

I would not schedule the last tape funeral just yet, granted there will continue to be periodic wakes and send-offs over the coming decade. Tape remains a viable data storage option for some environments when used in new ways, as well as in new locations, complementing flash SSD and other persistent memories (aka storage class memories) along with HDDs.

Personally, I have been directly tape free for over 14 years. Granted, I have data in some clouds and object storage that may exist on a very cold data storage tier, possibly on tape, transparent to my use. However, just because I do not physically have tape does not mean I do not see why others still have to, or prefer to, use it for different needs.

Also, keep in mind that tape continues to be used as an economical data transport for bulk movement of data in some environments. Meanwhile, for those who only want, need, or wish tape to finally go away, close your eyes, click your heels together, and repeat your favorite tape-is-not-alive chant three (or more) times. Keep in mind that HDDs are keeping tape alive by offloading some functions, while SSDs are keeping HDDs alive by handling tasks formerly done by spinning media. Meanwhile, tape can be and still is called upon by some organizations to protect or enable bulk recovery for SSDs and HDDs, even in cloud environments, granted in new and different ways.

What this all means is that as a zombie technology, having been declared dead for decades yet still alive, there is life after death for tape, which is still alive, for now.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.