2018 Hot Popular New Trending Data Infrastructure Vendors to Watch

Here is the 2018 Hot Popular New Trending Data Infrastructure Vendors To Watch list, which includes startups as well as established vendors doing new things. This piece follows last year’s hot favorite trending data infrastructure vendors to watch list (here), as well as the who will be at the top of the storage world in a decade piece here.

Data Infrastructures Support Information Systems Applications and Their Data

Data Infrastructures are what exist inside physical data centers and cloud availability zones (AZ), defined to provide traditional as well as cloud services. Cloud and legacy data infrastructures combine hardware (server, storage, I/O network) and software, along with management tools, policies, tradecraft techniques (skills) and best practices, to support applications and their data. There are different types of data infrastructures to meet the needs of various environments that range in size, scope, focus and application workloads, along with performance and capacity.

Another important aspect of data infrastructures is that they exist to protect, preserve, secure and serve applications that transform data into information. This means that availability and data protection, including archive, backup, business continuance (BC), business resiliency (BR), disaster recovery (DR), privacy and security, among other related topics, technologies, techniques and trends, are essential data infrastructure topics.

Different timelines of adoption and deployment for various audiences

Some of those on this year’s list are focused on different technology areas, others on the size or type of vendor, supplier, or service provider. Whether a vendor counts as new, startup, evolving, or established varies depending on whether you are an industry insider or an IT customer environment. The list mixes newcomers, some you may not have heard of, for those who want or need the most current list of startups to rattle off for industry adoption (and deployment), along with established players doing new things that might lead to customer deployment (and adoption).

AMD – The AMD EPYC family of processors is opening up new opportunities for AMD to challenge Intel among others for a more significant share of the general-purpose compute market in support of data center and data infrastructure needs. An advantage that AMD has, and is playing to in the industry speeds, feeds, slots and watts price performance game, is the ability to support more memory and PCIe lanes per socket than others including Intel. Keep in mind that PCIe lanes will become even more critical as NVMe deployment increases, as well as the use of GPUs and faster Ethernet among other devices. Name brand vendors including Dell and HPE among others have announced or are shipping AMD EPYC based servers.

Aperion – Cloud and managed service provider with diverse capabilities.

Amazon Web Services (AWS) – Continues to expand its footprint regarding regions, availability zones (AZ), also known as data centers in regions, as well as services along with the breadth of those capabilities. AWS recently announced a new Snowball Edge (SBE), which in the past was a data migration appliance, now enhanced with on-prem Elastic Compute Cloud (EC2) capabilities. What this means is that AWS can put on-prem compute capabilities as part of a storage appliance for short-term data movement, migration, conversion, and importing of virtual machines among other items.

On the other hand, AWS can also be seen as using SBE as a first entry to placing equipment on-prem for hybrid clouds, or, converged infrastructure (CI), hyper-converged infrastructure (HCI), cloud in a box similar to Microsoft Azure Stack, as well as CI/HCI solutions from others.

My prediction near term, however, is that CI/HCI vendors will either ignore SBE, downplay it, create some new marketing on why it is not CI/HCI, or spread FUD about vendor lock-in. In other words, make some popcorn, sit back, and watch the show.

Backblaze – Low-cost, high-capacity cloud storage provider for backup and archiving, known for their quarterly disk drive reliability (or failure) reports. They have been around for a while and have a good reputation among those who use their services as a low-cost alternative to the larger providers.

Barefoot Networks – Some of you may already be aware of or following Barefoot Networks, while others may not have heard of them outside of the networking space. They have some impressive capabilities, and being new and relatively unknown, they are an excellent addition to this list.

Cloudian – Continuing to evolve and no longer just another object storage solution, Cloudian has been expanding via organic technology development as well as acquisitions, giving them a broad portfolio of software-defined storage and tiering from on-prem to the cloud with block, file and object access.

Cloudflare – Not exactly a startup, some of you may know or are using Cloudflare, while to others, their role as a web cache, DNS, and other service is transparent. I have been using Cloudflare on my various sites for over a year, and as a customer I like the security, DNS, cache and analytics tools they provide.

Cobalt Iron – For some, they might be new. Software-defined data protection and management is the name of the game over at Cobalt Iron, which has been around a few years under the radar compared to more popular players. If you have or are involved with IBM Tivoli (aka TSM) based backup and data protection among others, check out the exciting capabilities that Cobalt can bring to the table.

CTERA – Having been around for a while, to some they might not be a startup; on the other hand, they may be new to others, while offering new data and file management options.

DataCore – You might know of DataCore for their software-defined storage and past storage hypervisor activity. However, they have a new piece of software, MaxParallel, that boosts server storage I/O performance. The software installs on your Windows Server instance (bare metal, VM, or cloud instance) and shows you performance with and without acceleration, which you can dynamically turn on and off.

DataDirect Networks (DDN) – Recently acquired Lustre assets from Intel, and is now picking up the pieces of storage startup Tintri after it ceased operations. What this means is that while beefing up their traditional High-Performance Compute (HPC) and Super Compute (SC) focus, DDN is also expanding into broader markets.

Dell Technologies – At its recent Dell Technology World event in Las Vegas during late April and early May 2018, several announcements were made, including some tied to the emerging Gen-Z along with composability. More recently, Dell Technologies along with VMware announced business structure and finance changes. Changes include VMware declaring a dividend; Dell Technologies, being its largest shareholder, will use the proceeds to fund restructuring and debt service. Read more about the VMware and Dell Technology business and financial changes here.

Densify – With a name like Densify no surprise they propose to drive densification and automation with AI-powered deep learning to optimize application resource use across on-prem software-defined virtual as well as cloud instances and containers.

FlureeDB – If you are into databases (SQL or NoSQL), as well as Blockchain or distributed ledgers, check out FlureeDB.

Innovium.com – When it comes to data infrastructure and data center networking, Innovium is probably not on your radar, however, keep an eye on these folks and their TERALYNX switching silicon to see where it ends up given their performance claims.

Komprise – File and data management solutions including tiering, along with partners such as IBM.

Kubernetes – A few years ago OpenStack, then Docker containers, was the favorite and trending discussion topic, then Mesos, and along comes Kubernetes. It’s safe to say, at least for now, Kubernetes is settling in as a preferred open source industry and customer de facto choice (I want to say standard, however, will hold off on that for now) for container and related orchestration management. Besides do-it-yourself (DiY) leveraging open source, there are also managed offerings including AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and VMware Pivotal Container Service (PKS) among others. Besides Azure, Microsoft also includes Kubernetes support (along with Docker and Windows containers) as part of Windows Server. A minimal example of talking to any of these clusters appears below.
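
For readers who want to see what working against any of these managed clusters looks like, here is a minimal sketch using the official Kubernetes Python client; it assumes a kubeconfig already created by your provider’s CLI (e.g., aws eks update-kubeconfig, az aks get-credentials, or gcloud container clusters get-credentials) and is illustrative rather than provider-specific.

    # Minimal sketch: list pods across all namespaces with the official
    # Python client (pip install kubernetes). Assumes a kubeconfig has
    # already been set up by a managed service CLI (EKS, AKS, GKE, PKS).
    from kubernetes import client, config

    config.load_kube_config()                  # reads ~/.kube/config
    core = client.CoreV1Api()
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)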

ManageEngine (part of Zoho) – Has data infrastructure monitoring technology called OpManager for keeping an eye on networking.

Marvell – Marvell may not be a familiar name (don’t confuse it with the comics company), however it has been a critical component supplier to partners whose server or storage technology you may be familiar with or have yourself. The server, storage and I/O networking chip maker has closed on its acquisition of Cavium (who previously bought QLogic among others). The combined company is well positioned as a key data infrastructure component supplier to various partners spanning servers, storage and I/O networking, including Fibre Channel (FC), Ethernet, InfiniBand, and NVMe (and NVMeoF) among others.

Mellanox – Known for their InfiniBand adapters, switches, and associated software, along with a growing presence in RDMA over Converged Ethernet (RoCE), they are also well positioned for NVMe over Fabrics among other growth opportunities following recent boardroom updates, along with technology roadmaps.

Microsoft – The Azure public cloud continues to evolve similarly to AWS with more region locations, availability zone (AZ) data centers, as well as features and extensions. Microsoft also introduced about a year ago its hybrid on-prem CI/HCI cloud in a box platform appliance Azure Stack (read about my test drive here). However, there is more to Microsoft than just their current cloud first focus, which means Windows (desktop) as well as Server are also evolving. Currently in public preview, the Windows Server 2019 insiders build is available to try out many new capabilities, some of which were covered in the recent free Microsoft Windows Server Summit held in June. Key themes of Windows Server 2019 include security, performance, hybrid cloud, containers, software-defined storage and much more.

Microsemi – Has been around for a while and is the combination of some vendors you may not have heard of, or not heard about in some time, including PMC-Sierra (which had acquired Adaptec) and Vitesse among others. The reason I have Microsemi on this list is a combination of their acquisitions, which might be an indicator of whom they pick up next. Another reason is that their components span data infrastructure topics from servers, storage, I/O and networking to PCIe and many more.

NVIDIA – GPU high performance compute and related compute offload technologies have been accessible for over a decade. More recently, with new graphics and computational demands, GPUs such as those from NVIDIA are in demand. Demand includes traditional graphics acceleration for physical and virtual environments, augmented and virtual reality, as well as cloud, along with compute-intensive analytics, AI, ML, DL and other cognitive workloads.

NGDSystems (NGD) – Similar to what NVIDIA and other GPU vendors do for enabling compute offload for specific applications and workloads, NGD is working on a variation. That variation is to move compute offload capabilities for server I/O storage-intensive workloads closer to, in fact into, storage components such as SSDs and emerging SCMs and PMEMs. Unlike GPU based applications or workloads that tend to be more memory and compute intensive, NGD is positioned for applications that are server I/O and storage intensive.

The premise of NGD is that they move the compute and application closer to where the data is, eliminating extra I/O as well as reducing the amount of main server memory and compute cycles needed. If you are familiar with other server storage I/O offload engines and systems such as the Oracle Exadata database appliance, NGD is working at a tighter integration granularity. How it works: your application gets ported to run on the NGD storage platform, which is SSD based and has a general-purpose processor. Your application is initiated from a host server and then runs on the NGD device, meaning I/Os are kept local to the storage system. Keep in mind that the best I/O is the one that you do not have to do; the second best is the one with the least resource or user impact. The sketch below makes the idea concrete.
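
Here is a tiny illustrative sketch of that premise; FakeComputationalSsd and its offload() method are hypothetical stand-ins for a vendor SDK such as NGD’s, not an actual NGD API, meant only to contrast hauling data to the host with shipping a small function to the device.

    # Illustrative only: contrast host-side processing (move data to
    # compute) with computational storage offload (move compute to data).
    class FakeComputationalSsd:
        def __init__(self, blocks):
            self.blocks = blocks                # pretend on-device data

        def read_all_blocks(self):
            # Conventional path: every block crosses the server I/O path.
            return list(self.blocks)

        def offload(self, func):
            # Offload path: the function runs "inside" the device, so
            # only the reduced result crosses the server I/O path.
            return sum(func(b) for b in self.blocks)

    ssd = FakeComputationalSsd([b"error ok", b"ok error error", b"ok"])
    host = sum(b.count(b"error") for b in ssd.read_all_blocks())  # moves all data
    dev = ssd.offload(lambda b: b.count(b"error"))                # moves one integer
    assert host == dev == 3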

Opvizor – Performance activity and capacity monitoring tools, including for VMware environments.

Pavilion – Startup with an interesting NVMe based hardware appliance.

Quest – Having regained their independence as a free-standing company since divestiture from Dell Technologies (Dell had previously acquired Quest before the EMC acquisition), Quest continues to make their data infrastructure related management tools available. Besides now being a standalone company again, keep an eye on Quest to see how they evolve their existing data protection and data infrastructure resource management tools portfolio via growth or acquisition; or perhaps Quest will be on somebody else’s future growth list.

Retrospect – Far from being a startup, after regaining their independence from EMC (who bought them several years ago), they have continued to enhance their data protection technology. Disclosure: I have been a Retrospect customer since 2001, using it for on-site as well as cloud data protection backups.

Rubrik – Becoming more of a data infrastructure household name given their expanding technology portfolio and marketing efforts. More commonly known in smaller customer environments, as well as broadly within industry insider circles, Rubrik has potential with continued technology evolution to move further upmarket similar to how Commvault did back in the late 90s, just saying.

SkyScale – Cloud service provider that offers dedicated bare metal, as well as private and hybrid cloud instances, along with GPUs to support AI, ML, DL and other high-performance compute workloads.

Snowflake – The name does not describe well what they do or who they are. However, they have interesting cloud data warehouse (old school) and large-scale data lake (new school) technologies.

Strongbox – Not to be confused with technology such as that from ioSafe (e.g., waterproof, fireproof), Strongbox is a data protection storage solution for storing archives, backups and BC/BR/DR data, as well as cloud tiering. For those who are into buzzword bingo, think cloud tiering, object, and cold storage among others. The technology evolved out of Crossroads and, with David Cerf at the helm, has branched out into a private company worth keeping an eye on.

Storbyte – With longtime industry insider sales and marketing pro Diamond Lauffin (formerly of Nexsan) involved as Chief Evangelist, this is one worth keeping an eye on, and it could be entertaining as well as exciting. In some ways it could be seen as a bit of a Nexsan meets NVMe meets NAND flash meets cost-effective value storage deja vu play.

Talon – Enterprise storage and management solutions for file sharing across organizations, ROBO and cloud environments.

Ubiquiti – Also known as UBNT, this is a data infrastructure networking vendor whose technologies span from WiFi access points (AP), high-performance antennas, routing, switching and related hardware to software solutions. UBNT is not as well-known in larger environments as Cisco or others. However, they are making a name for themselves moving from the edge to the core. That is, working from the edge with APs, routers, firewalls and gateways for SMB, ROBO and SOHO as well as consumer environments (I have several of their APs, switches, routers and high-performance antennas along with management software), these technologies are also finding their way into larger environments.

My first use of UBNT was several years ago when I needed to get an IP network connection to a remote building separated by several hundred yards of forest. The solution I found was to get a pair of UBNT Nano APs and put them in secure bridge mode; now I have a high-performance WiFi bridge through a forest of trees. Since then, I have replaced an older Cisco router, several Cisco and other APs, and done a phased migration of switches.

UpdraftPlus – If you have a WordPress web or blog site, you should also have the UpdraftPlus plugin (go premium, btw) for data protection. I have been using UpdraftPlus for several years on my various sites to backup and protect the MySQL databases and all other content. For those of you who are familiar with Spanning (e.g., was acquired by EMC, then divested by Dell) and what they do for cloud applications, UpdraftPlus does similar for lower-end, smaller cloud-based applications. A rough sketch of what such a plugin automates follows.
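
For a rough idea of what a backup plugin like this automates, here is a minimal sketch (not UpdraftPlus’s actual mechanism) that dumps a MySQL database with mysqldump and copies the dump to S3 with boto3; the database name, bucket, and paths are hypothetical placeholders.

    # Sketch of what a WordPress backup plugin automates: dump the
    # MySQL database, then copy the dump to cloud storage. This is
    # NOT UpdraftPlus's actual mechanism; names below are placeholders.
    import subprocess
    import boto3

    DB_NAME = "wordpress"                 # hypothetical database name
    DUMP_PATH = "/tmp/wordpress.sql"      # hypothetical local dump path
    BUCKET = "my-backup-bucket"           # hypothetical S3 bucket

    with open(DUMP_PATH, "wb") as out:
        subprocess.run(["mysqldump", DB_NAME], stdout=out, check=True)

    boto3.client("s3").upload_file(DUMP_PATH, BUCKET, "backups/wordpress.sql")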

Vexata – Startup with a scale-out NVMe storage solution.

VMware – Expanding their cloud foundation from on-prem to in and on clouds including AWS among others. Their data infrastructure focus continues to expand from core to edge across server, storage, I/O and networking. With the recent Dell Technologies changes and VMware declaring a dividend, it should be interesting to see what lies ahead for both entities.

What About Those Not Mentioned?

By the way, if you were wondering why others are not in the above list: simple, check out last year’s list, which includes Apcera, Blue Medora, Broadcom, Chelsio, Commvault, Compuverde, Datadog, Datrium, Docker, E8 Storage, Elastifile, Enmotus, Everspin, Excelero, Hedvig, Huawei, Intel, Kubernetes, Liqid, Maxta, Micron, Minio, NetApp, Neuvector, Noobaa, NVIDIA, Pivot3, Pluribus Networks, Portworx, Rozo Systems, ScaleMP, Storpool, Stratoscale, SUSE, Tidalscale, Turbonomic, Ubuntu, Veeam, Virtuozzo and WekaIO. Note that many of the above have expanded their capabilities in the past year and remain, or have become, even more interesting to watch, while some might be on a future where are they now list sometime down the road. View additional vendors and service providers via our industry links and resources page here.

What About New, Emerging, Trending and Trendy Technologies

Bitcoin and Blockchain storage startups, some of which claim or would like to replace cloud storage, taking on giants such as AWS S3 in the not so distant future, have been popping up lately. Some of these have good and exciting stories if they can deliver on the hype along with the premise. A few names to drop include, among others, Filecoin, Maidsafe, Sia and Storj, along with services from AWS, Azure, Google and a long list of others.

Besides Blockchain distributed ledgers, other technologies and trends to keep an eye on include compute processors from ARM to SoC, GPU, FPGA and ASIC for offload and specialized processing. GPUs, ASICs and FPGAs are appearing in new deployments across cloud providers as they look to offload processing from their general servers to derive total effective productivity out of them. In other words, innovating by offloading to boost their effective return on investment (the old ROI), as well as increase their return on innovation (the new ROI).

Other data infrastructure server I/O trends to watch, which also tie into storage and networking, include Gen-Z, which some may claim as the successor to PCIe, Ethernet, or InfiniBand among others (hint: get ready for a new round of “something is dead” hype). Near term, the objective of Gen-Z is to coexist with and complement PCIe, Ethernet, and CPU to memory interconnects, while enabling more granular allocation of data infrastructure resources (e.g., composability). Besides watching who is part of the Gen-Z movement, keep an eye on who is not part of it yet, specifically Intel.

NVMe and its many variations, from server internal to networked NVMe over Fabrics (NVMeoF) along with its derivatives, continue to gain both industry adoption as well as customer deployment. There are some early NVMeoF based server storage deployments (along with marketing dollars). However, server-side NVMe customer adoption is where the dollars are currently moving to the vendors. In other words, it’s still early in the bigger, broader NVMe and NVMeoF game.

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Let’s see how those mentioned last year as well as this year, along with some new and emerging vendors and service providers who did not get mentioned, end up next year, as well as in the years after that.

Different timelines of adoption and deployment for various audiences

Keep in mind that there is a difference between industry adoption and customer deployment, granted they are related. Likewise let’s see who will be at the top in three, five and ten years, which means some of the current top or favorite vendors may or may not be on the list, same with some of the established vendors. Meanwhile, check out the 2018 Hot Popular New Trending Data Infrastructure Vendors to Watch.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Summary of Dell transaction announcement includes:

  • VMware declares an $11 billion USD cash dividend pro rata to all VMware stockholders.
  • Given its ownership percentage of VMware, Dell Technologies will receive approximately $9 billion USD of the cash dividend.
  • Dell plans to list its Class C common stock shares on the New York Stock Exchange (NYSE).
  • Dell plans to use the VMware dividend proceeds to fund the cash consideration to be paid to Class V (tracking stock) shareholders.
  • For each Class V share (e.g. VMware tracking stock), shareholders can choose to receive one of the following (see the quick math check below):

    1.3665 shares of Dell Technologies Class C common stock, or
    $109 in cash per DVMT (Class V) share, a 29% premium per share
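
As a quick sanity check on those numbers: $109 / 1.29 ≈ $84.50, the implied pre-announcement DVMT trading price that makes $109 a 29% premium; likewise $109 / 1.3665 ≈ $79.77, the implied value per Class C share if the cash and stock elections are treated as equivalent.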

Dell Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Additional points of interest in this transaction include:

  • Transaction expected to close Q4 CY2018, subject to Class V shareholder approval.
  • VMware maintains its independence as a separate publicly traded company.
  • Dell Technologies maintains its 81% ownership of VMware common stock.
  • Dell Technologies Class V (DVMT) shareholders will own 20.8% to 31.0% of Dell Class C (depending on cash election amounts).
  • Streamlines Dell capital and ownership structure.
  • Establishes a public security (stock) in a global end to end data infrastructure provider (e.g. Dell Technologies stock on the NYSE).
  • Enables financial flexibility for future strategic initiatives.

Dell Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Michael Dell and Silver Lake Continued Ownership

As part of this transaction, both Michael Dell and Silver Lake Partners announced their commitment to Dell Technologies. Michael Dell will continue to serve as Chairman and CEO, as well as a committed stockholder, beneficially owning between about 47% and 54% of Dell Technologies on a fully diluted basis. Silver Lake, an equity partner and investor in Dell, will continue its long-term partnership with Michael Dell, beneficially owning between about 16% and 18% of Dell Technologies on a fully diluted basis.

Where to learn more

Learn more about Dell Technologies, VMware, Data Infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

This announcement enables Dell to streamline its financial structure while providing VMware shareholders with dividend value. In addition, this Dell Technologies announcement puts to rest industry discussions of what Michael Dell, along with Dell Technologies and VMware, will do in the future. Speaking of the future, this transaction could also pave the way for future investment or acquisitions by Dell and/or VMware. Now the question is, if you are a DVMT tracking stock shareholder, do you take the $109 USD cash, or new Class C Dell Technologies stock? Now let’s see how the Dell Technologies Class V VMware tracking stock exchange for stock or cash plays out during the rest of summer and into the fall.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

June 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 6 (June 2018)

Hello and welcome to the June 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the May 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

In this issue, buzzword topics include AI, All Flash, HPC, Lustre, Multi Cloud, NVMe, NVMeoF, SAS, and SSD among others:

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

June data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection industry activity includes among others:

Check out what’s new at Amazon Web Services (AWS) here, as well as Microsoft Azure here, Google Cloud Compute here, IBM Softlayer here, and OVH here. CTERA announced new cloud storage gateways (HC Series) for enterprise environments that include all flash SSD options, capacity up to 96TB (raw), petabyte-scale tiering to public and private clouds, 10 GbE connectivity, virtual machine deployment, along with high availability configurations.

Cray announced enhancements to its Lustre (parallel file system) based ClusterStor storage system for high performance compute (HPC), a product line it previously acquired from Seagate (who had acquired it as part of the Xyratex acquisition). New enhancements for ClusterStor include an all flash SSD solution that will integrate and work with existing hard disk drive (HDD) based systems.

In related Lustre based activity, DataDirect Networks (DDN) has acquired Intel’s Lustre file system capability. Intel acquired its Lustre capabilities via its purchase of Whamcloud back in 2012, and in 2017 announced that it was getting out of the Lustre business (here and here). DDN also announced new storage solutions for enabling HPC environment workloads along with Artificial Intelligence (AI) centric applications.

HPE, which held its Discover event, announced a $4 billion USD investment over four years in the development of edge technologies and services, including software defined WAN (SD-WAN) and security among others.

Microsoft held its first virtual Windows Server Summit in June that outlined current and future plans for the operating system along with its hybrid cloud future.

Penguin Computing has announced the Accelion solution for accessing geographically dispersed data, enabling faster file transfer and other data movement functions.

SwiftStack has added multi cloud features (enhanced search, universal access, policy management, automation, data migration), making them available via the 1space open source project. 1space enables a single object namespace across different object storage locations, including integration with OpenStack Swift.

Vexata announced a new version of its operating system (VX-OS) for its storage solution, including NVMe over Fabrics (NVMe-oF) support.

Speaking of NVMe and fabrics, the Fibre Channel Industry Association (FCIA) announced that the International Committee on Information Technology Standards (INCITS) has published the T11 technical committee developed NVMe over Fibre Channel (FC-NVMe) standard.

NVMe frontend NVMeoF
Various NVMe front-end including NVMeoF along with NVMe back-end devices (U.2, M.2, AiC)

Keep in mind that there are many different facets to NVMe including direct attached (M.2, U.2/8639, PCIe AiC) along with fabrics. Likewise, there are various fabric options for the NVMe protocol including over Fibre Channel (FC-NVMe), along with other NVMe over Fabrics including RDMA over Converged Ethernet (RoCE) as well as IP based among others. NVMe can be used as a front-end on storage systems supporting server attachment (e.g. competes with Fibre Channel, iSCSI, SAS among others).

Another variation of NVMe is as a back-end for attachment of drives or other NVMe based devices in storage systems, as well as servers. There are also end to end NVMe (e.g. both front-end and back-end) options. Keep context in mind when you hear or talk about NVMe, in particular NVMe over fabrics; learn more about NVMe at https://storageioblog.com/nvme-place-volatile-memory-express/.

Toshiba announced its new RM5 series of high capacity SAS SSDs for replacement of SATA devices in servers. The RM5 series being added to the Toshiba portfolio combines the capacity and economics traditionally associated with SATA SSDs with the performance and connectivity of SAS.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments The storage administrator skills you need to keep up today
Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mindset

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

Announcing Windows Server Summit Virtual Online Event
May 2018 Server StorageIO Data Infrastructure Update Newsletter
Solving Application Server Storage I/O Performance Bottlenecks Webinar
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Which Enterprise HDD for Content Server Platform
Server Storage I/O Benchmark Performance Resource Tools
Introducing Windows Subsystem for Linux WSL Overview
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The Server StorageIO data infrastructure recommended reading list covers various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is also a good resource to check out. Speaking of my books, Didier Van Hoye (@WorkingHardInIt) has a good review over on his site you can view here; also check out the rest of his great content while there.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

July 25, 2018 – Webinar – Data Protect & Storage

June 27, 2018 – Webinar – App Server Performance

June 26, 2018 – Webinar – Cloud App Optimize

May 29, 2018 – Webinar – Microsoft Windows as a Service

April 24, 2018 – Webinar – AWS and on-site, on-premises hybrid data protection

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

What this all means and wrap-up

Data Infrastructures are what exists inside physical data centers, as well as spanning cloud, converged, hyper-converged, virtual, serverless and other software defined as well as legacy environments. NVMe continues to gain in industry adoption as well as customer deployment. Cloud adoption also continues, along with multi-cloud deployments. Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter, and watch for more NVMe, cloud, and data protection among other topics in future posts, articles, events, and newsletters.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Solving Application Server Storage I/O Performance Bottlenecks Webinar

The best I/O is the one you do not have to do; the second best is the one with the least server I/O and storage overhead, along with the least application performance bottleneck impact.

Fast applications need fast servers, storage, I/O networking hardware, and software. Merely throwing more hardware as a cache at application performance bottlenecks can help. However, throwing more hardware at performance problems can also cost a lot of cash. On the other hand, a little fast memory and storage in the right place, with robust software performance acceleration, can have significant application productivity benefits. Fast hardware also needs fast software to help boost application and user productivity.

As application workload activity increases, implementing server software performance acceleration along with additional fast memory and storage, including flash, Storage Class Memories (SCM) and other SSD along with NVMe accessed devices, means even more work can be done, boosting productivity while reducing cost.
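
As a tiny illustration of avoiding I/O with a little fast memory in the right place, here is a hedged Python sketch that caches file reads so repeat requests are served from memory instead of storage; it is a concept demo only, not DataCore’s MaxParallel mechanism.

    # Tiny demo of "the best I/O is the one you do not have to do":
    # cache reads in fast memory so repeated requests never reach the
    # storage device. Concept sketch only, not any vendor's product.
    from functools import lru_cache
    import os
    import tempfile

    @lru_cache(maxsize=128)
    def read_file(path: str) -> bytes:
        with open(path, "rb") as f:     # physical I/O happens once per path
            return f.read()

    fd, path = tempfile.mkstemp()       # throwaway file so the demo is self-contained
    os.write(fd, b"hello")
    os.close(fd)
    assert read_file(path) == read_file(path) == b"hello"  # 2nd call skips the disk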

Join me on June 27 at 1 PM Pacific Time (PT) when I host a free webinar (registration required) sponsored by DataCore and produced by Redmond Magazine/1105 Media as we discuss Solving Application Server Storage I/O Performance Bottlenecks including what you can do today.

I will be joined by guest presenters Augie Gonzalez, Director Technical Product Marketing and Tim Warden, Director Engineering Product Management both from DataCore. During the interactive webinar discussions, we invite you to participate with your questions, as we look at issues, challenges, various approaches, and what you can do today to boost different application performance and productivity.

This webinar is for those whose applications have the need for speed including database, VDI, SharePoint, Exchange, AI, ML and other I/O intensive workloads. Topics that we will be discussing in addition to your questions include:

  • Boosting application performance without breaking the bank
  • Improving application productivity and reducing user wait time
  • Gaining insight and awareness into bottlenecks and what to do
  • Unlocking value in your existing hardware and software licenses
  • What you can do today, literally right after or even during this webinar

Where to learn more

Learn more about server storage I/O performance and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

The best I/O is the one you do not have to do; the second best is the one that has the least impact on your applications while boosting user productivity. There are many different approaches to addressing various server storage I/O performance bottlenecks across applications. Join me on June 27, 2018 at 1 PM PT for the free webinar Solving Application Server Storage I/O Performance Bottlenecks and learn what you can do today to boost your users’ productivity.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Announcing Windows Server Summit Virtual Online Event

Microsoft will be hosting a free (no registration required) half-day virtual (e.g. online) Windows Server Summit on June 26, 2018, starting at 9 AM PT. As part of its continued focus on supporting a hybrid strategy spanning on-premises Windows Server to Azure (among other clouds including AWS), Microsoft is preparing for the launch later this year of Windows Server 2019.

There is no registration required; you can just show up without concern about getting email or other spam. However, you can also click here to save the date, as well as here to get updates on the event.

Microsoft Windows Server LTSC and SAC release

Windows Server 2019 is now in insider preview (get it here) and is the next Long Term Servicing Channel (LTSC) release following Windows Server 2016. In the past, Microsoft would have called Windows Server 2019 something such as Windows Server 2016 R2; however, that has changed with the new Semi-Annual Channel (SAC) and LTSC release cycles.

Keynote kick off presentations will be from Erin Chapple, Director of Program Management, Cloud + AI (which includes Windows Kernel, Hypervisors, Containers and Storage), Arpan Shah, General Manager of Azure Infrastructure marketing (Windows Server, Azure IaaS, Azure Stack, Azure Management and Security), and Jeff Woolsey, Principal PM, Windows Server. In addition to the kick off presentations covering the current state and status of Windows Server for on-premises bare metal, virtual, container as well as cloud deployments, there will be demos, Q&A, roadmaps and much more. Topics will include new and recent functionality such as Windows Server 2019, Windows Admin Center (formerly known as Honolulu), and IoT.

Images Via Microsoft Windows Server Summit Page

Windows Server Summit Break Out Tracks

During the Windows Server Summit, there will be four technology focused tracks including:

  • Hybrid – From on-premises to Azure, how Windows Server supports different workloads in various configurations, along with associated management tools (including Windows Admin Center, aka Honolulu)
  • Security – New and recent security enhancements for Windows Server along with Hyper-V and other related topics.
  • Application Platform – Containers and Linux support along with associated management tools for on-premises and Azure.
  • Hyper-converged infrastructure (HCI) – Leveraging software defined storage (SDS) with Storage Spaces Direct (S2D) in Windows Server 2016, along with Hyper-V and other technologies; learn how Microsoft supports HCI and beyond.

Where to learn more

Learn more about Windows Server Summit and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Windows Server remains relevant today for traditional on-site, on-premises deployments, as well as cloud and container deployments among others. Remember to click here to save the date, click here to sign up for Windows Server Summit updates, and learn more about the Windows Server Summit virtual online event here. See you there, or at least virtually.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

May 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 5 (May 2018)

Hello and welcome to the May 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the April 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

May has been a busy month with a lot of data infrastructure related activity from software-defined virtual, cloud, container, converged, serverless to legacy, hardware, software, services, server, storage, I/O and networking along with data protection topics among others.

In this issue, buzzword topics include GDPR, NVMe, NVMeoF, Composable, Serverless, Data Protection, SCM, Gen-Z, and MaaS:

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

May has been a busy month, some data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection activity includes among others:

Depending on when you read this, the new General Data Protection Regulation (GDPR) is either days away or already in effect. For those who are not aware of GDPR other than seeing many inbox items in your email pertaining to it, here are some resources as a refresher or primer:

May Buzzword, Buzz Topic and Trends

Besides data protection and GDPR, other recent data infrastructure related news, trends, technologies and topics to keep an eye on (besides AI, ML, DL, AR/VR, IoT, Blockchain, and Serverless) include Metal as a Service (MaaS), which might be familiar to some and something new to others. Canonical has been busy for some time now with MaaS, including in Ubuntu, and they are not alone, with variations appearing from various managed service providers, hosting and cloud providers as well. NVMe has become a more common topic, technology and trend, including for use in servers as well as over fabrics (e.g. NVMe over Fabrics) as a language for server, storage, I/O communication.

A new emerging companion to NVMe is Gen-Z, which initially is a companion to PCIe. Longer term, Gen-Z could possibly become a replacement, as well as being used for accessing dynamic random access memory (DRAM) among other uses. Storage Class Memory (SCM) has been an industry conversation topic for several years now, with new persistent memories (PMEM) that combine the best of traditional DRAM (speed and write endurance) with the persistence, higher capacity and lower cost of traditional NAND flash SSDs.

Another trend topic is that for some, ASIC, FPGA and GPU are new companions to standard commodity compute processors along with servers, yet for others it may be deja vu, as they have been used for years (ok, decades) in some solutions. For now, a few other buzzwords and buzz terms to add or refresh in your data infrastructure vocabulary include distributed ledgers (aka blockchains), composable resources, and ephemeral instance storage (storage on a cloud instance).

May NVMe Momentum Movement Activity

May saw a lot of NVMe related activity, from chips and components (adapters, devices) to systems spanning direct attached to NVMe over Fabrics (NVMeoF). Here is a primer (or refresher) for NVMe along with various deployment options. NVMeoF includes RDMA over Converged Ethernet (RoCE) based options, along with NVMe over Fibre Channel (FC-NVMe), as well as emerging NVMe over IP.

NVMe options
NVMe being used for front-end accessed via shared PCIe along with back-end devices

There are many different facets of NVMe including for use as a front-end on storage systems supporting server attachment (e.g. competes with Fibre Channel, iSCSI, SAS among others). Another variation of NVMe is as a back-end for attachment of drives or other NVMe based devices in storage systems, as well as servers.

NVMe backend
Front-end using traditional block SAN access with back-end NVMe, SAS and SATA devices

Read more about the many different options and variations of NVMe including key questions to ask or understand, deployment topology along with other related topics at thenvmeplace.com.

NVMe frontend NVMeoF
Various NVMe front-end including NVMeoF along with NVMe back-end devices (U.2, M.2, AiC)

Software Defined Data Infrastructure Activity

Amazon Web Services (AWS) continues to add new features and functionality, as well as extending existing capabilities into various regions. Some recent updates include new Elastic Compute Cloud (EC2) Microsoft Windows Server versions 1709 and 1803 Amazon Machine Images (AMIs). Other AWS updates include spot instance support for Red Hat BYOL (Bring Your Own License), VPN enhancements, X1e instances available in Frankfurt, an H1 instance price reduction, as well as LightSail now in the Canada, Paris, and Seoul regions.

For those who are not familiar with LightSail, it provides virtual private servers (VPS), which are different from traditional EC2 instances. LightSail can be a cost-effective way for those who need to move out of general population shared hosting, yet cannot justify a full EC2 instance while requiring more than a container.

LightSail instances are also available with various software pre-installed, such as for WordPress websites among others. For example, I have used LightSail as a backup and standby WordPress site for StorageIOblog, using UpdraftPlus Pro for data protection.

In other news, AWS C5d EC2 instances are available in various regions. C5d instances are available with 2, 4, 8, 16, 36 and 72 vCPUs along with up to 1800GB of NVMe based ephemeral storage for on-demand reserved or spot instances.

Note that instance-based storage is temporary, meaning that it persists only for the life of the instance. What this means is that if you stop and restart the instance, the data is not persistent. Instance-based storage is useful for data that can be protected or persisted to other storage including EBS (Elastic Block Store). Usage includes batch, log and analytics processing, burst buffers, cache or workspace. A snapshot sketch follows as one way to persist such results.
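
As one hedged example of persisting results beyond an instance’s life, here is a small boto3 sketch that snapshots the EBS volume the data was copied to; the volume ID and region are hypothetical placeholders, and AWS credentials are assumed to be configured.

    # Sketch: persist work beyond an instance's life by snapshotting
    # the EBS volume the results were copied to from instance storage.
    # Volume ID and region are placeholders; credentials assumed.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",   # placeholder volume ID
        Description="Persist batch results copied off instance store",
    )
    print("Snapshot started:", snap["SnapshotId"])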

AWS also announced a new Simple Storage Service (S3) storage class a month or so ago called S3 One Zone-Infrequent Access (One Zone-IA). This new storage class primarily provides a lower cost of storage with lower durability (e.g., data kept in one zone vs. spread across multiple). Over the past couple of months, I have been migrating from S3 Infrequent Access (IA) as well as Standard into One Zone-IA. Some of my active data remains in the S3 Standard storage class, while cold archives are in Glacier.

A tip about migrating to One Zone-IA, as well as between other S3 storage classes: pay attention to your API calls and monthly budget. You might see an increase in S3 costs during the migration due to the API calls involved (gets, puts, lists), which then settles into the lower prices once data has been moved. In other words, pay attention to how API calls are charged per storage class per month, along with other fees, rather than focusing only on cost per TByte. Read about other recent AWS news updates here.
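
For the curious, moving an existing object between storage classes is itself an API call (a copy of the object onto itself with a new StorageClass), which is exactly why a large migration shows up on the monthly bill; here is a hedged boto3 sketch with placeholder bucket and key names.

    # Sketch: move an object to One Zone-IA by copying it onto itself.
    # Every object moved is a COPY request, which is why bulk storage
    # class migrations show up as API charges. Names are placeholders.
    import boto3

    s3 = boto3.client("s3")
    s3.copy_object(
        Bucket="my-bucket",
        Key="archive/report.pdf",
        CopySource={"Bucket": "my-bucket", "Key": "archive/report.pdf"},
        StorageClass="ONEZONE_IA",      # or STANDARD_IA, STANDARD, etc.
    )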

Software-defined storage startup Cloudian announced its technology is available for test drive on Google Cloud Platform, as part of a continued industry trend. That trend is for storage vendors to make their storage software technology available on different cloud platforms such as AWS, Azure, Google and Softlayer among others.

Dell Technology World 2018

Dell Technologies made several announcements as part of Dell Technology World that are covered in a series of posts here. Announcements included PowerMax (the successor to VMAX), XtremIO X2 updates, and new servers and workstations among many other items; read more here.

Besides the data infrastructure, cloud service providers and systems vendors, component suppliers including Cavium announced NVMe over Fibre Channel updates (here and here), along with Marvell NVMe updates here. HPE announced new thin clients and software (t430 Thin Client, HP mt44 Mobile Thin Client, HP ThinPro software), as well as updates to 3PAR and other storage solutions.

IBM announced various storage enhancements (and here), as well as a happy 30th anniversary to the IBM Power9 based i systems. In other news, Kaseya bought backup and data protection vendor Unitrends.

NVMe NAND flash Intel Optane

Micron announced that its first quad level cell (QLC) NAND flash solid state device (SSD), the 5210, has begun shipping to select customers (and vendors). QLC packs or stacks 4 bits per cell. The 5210 is optimized for read-intensive workloads with up to 33% higher densities compared to previous generation TLC (triple level cell) NAND flash. Broader market availability is expected later in fall 2018; the 5210 comes in a 2.5” form factor as a standard SSD or HDD replacement, with capacities from 1.92TB to 7.68TB.

In other news, Micron also announced a $10 billion (USD) stock repurchase plan, along with an extension of the Intel 3D NAND flash memory partnership involving 3D NAND flash as well as 96 layer 3D NAND. Meanwhile, various vendors are increasingly talking about how their systems are or will be storage class memory (SCM) ready, including for media such as Micron 3D XPoint, also known as Intel Optane, among others.

Microsoft has placed into public preview Azure Active Directory (AAD) Storage authentication for Azure Blobs and Queues. Azure Storage Explorer is now released as version 1.0. AAD storage authentication enables organizations to implement role-based access control of Azure storage resources. Speaking of Azure, Microsoft has published several architectures, reference and other content at the Azure Virtual Datacenter portal here.
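
Here is a brief hedged sketch of what AAD based access to Blob storage looks like from Python using the current azure-identity and azure-storage-blob packages; the account URL and container name are placeholders, and an RBAC role such as Storage Blob Data Reader must already be assigned to the signed-in identity.

    # Sketch: role-based (Azure AD) access to Blob storage without
    # account keys. Account URL and container name are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient(
        account_url="https://myaccount.blob.core.windows.net",
        credential=DefaultAzureCredential(),
    )
    for blob in service.get_container_client("backups").list_blobs():
        print(blob.name)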

If you have not done so, check out Azure File Sync, which is currently in public preview. Having been involved with and using it for over a year, including during the private preview, Azure File Sync is an exciting, useful technology for creating hybrid distributed file sharing with cloud tiering solutions. Learn more about Azure File Sync here and here. In other news, Microsoft has announced, as part of the April 2018 Windows 10 build, preview support for running the Google Android emulator on Hyper-V.

NetApp has had Azure based NAS storage in preview for a while now, and also announced Cloud Volumes on Google Cloud Platform (GCP). In addition to Cloud Volumes on AWS, Azure, and GCP, NetApp also announced enhanced NVMe based storage systems among other updates.

Two companies that have similar names are Opendrives (video workflow acceleration) and Opendrive (cloud storage, backup, and data protection). Meanwhile, data infrastructure startup Pavilion has received new funding as well as begun talking about their NVMe including NVMe over Fabric (NVMeOF) hardware storage system. Long-time data infrastructure converged server storage startup Pivot3 announced additional cloud workload mobility.

Pure Storage made a couple of announcements including the FlashArray//X NVMe based shared accelerated storage system, as well as the NVIDIA (GPU powered) based AIRI Mini for AI/DL/ML.

Have you heard about Snowflake Computing, aka the cloud data warehouse solution? If not, check them out here. Another cloud-related data infrastructure vendor to look into is Upbound.io, who have received additional funding for their multi-cloud management solutions.

Building off of recent VMware vSphere updates (here) and Dell Technology World here, the following is an excellent post about Instant Clone in vSphere 6.7, along with a VMware vSAN HCI assessment tool here.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mindset
Via SearchStorage: Comments Dell EMC storage IPO, VMware merger plans still unclear
Via SearchStorage: Comments Dell EMC midrange storage keeps its overlapping arrays
Via SearchStorage: Comments Dell EMC all-flash PowerMax replaces VMAX, injects NVMe
Via IronMountain InfoGoto: The Growing Trend of Secondary Data Storage

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

Dell Technology World 2018 Announcement Summary
Part II Dell Technology World 2018 Modern Data Center Announcement Details
Part III Dell Technology World 2018 Storage Announcement Details
Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure
Part V Dell Technology World 2018 Server Converged Announcement Details
April 2018 Server StorageIO Data Infrastructure Update Newsletter
VMware vSphere vSAN vCenter version 6.7 SDDC Update Summary
PCIe Fundamentals Server Storage I/O Network Essentials
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Application Data Value Characteristics Everything Is Not The Same
Data Infrastructure Resource Links cloud data protection tradecraft trends
IT transformation Serverless Life Beyond DevOps Podcast
Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips
Introducing Windows Subsystem for Linux WSL Overview
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List


In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening items spanning various IT, data infrastructure and related topics. The Intel Recommended Reading List (IRRL) for developers is also a good resource to check out. Speaking of my books, Didier Van Hoye (@WorkingHardInIt) has a good review over on his site that you can view here; also check out the rest of his great content while there.

Containers, serverless and Kubernetes continue to gain industry adoption, as well as customer deployment. Here is some information about Microsoft Azure Kubernetes Service (AKS). Note that AWS has Elastic Kubernetes Service (EKS), Google has Kubernetes Engine (GKE), and VMware and Pivotal offer Pivotal Container Service (PKS), among others.

Here is an interesting perspective by Ben Kepes about serverless (e.g., life beyond Kubernetes and containers, which in turn were life beyond virtualization, which to some was life beyond bare metal), as well as the all too frequent punditry and evangelism declaring that something new means something else is dead.

SNIA has updated its Emerald (aka green) energy effectiveness (with a focus on productivity) measurement specification (V3.01), adding NAS NFS file activity (besides block). Learn more at snia.org/forums/green.

Watch for more items to be added to the recommended reading list bookshelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

June 27, 2018 – Webinar – TBA

May 29, 2018 – Webinar – Microsoft Windows as a Service

April 24, 2018 – Webinar – AWS and on-site, on-premises hybrid data protection

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommended Reading and Watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Connect and Converse With Us


Subscribe to Newsletter – Newsletter Archives – StorageIO.com – StorageIOblog.com

What this all means and wrap-up

Data Infrastructures are what exist inside physical data centers, spanning cloud, converged, hyper-converged, virtual, serverless and other software defined as well as legacy environments. So far this spring there has been a lot of data infrastructure related activity, from new technology announcements to events and trends, among other things. Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter, and watch for more NVMe, Gen-Z, cloud and data protection among other topics in future posts, articles, events, and newsletters.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Windows Server 2019 Insiders Preview


Microsoft Windows Server 2019 Insiders Preview has been announced. Under past naming conventions, Windows Server 2019 might have been called Windows Server 2016 R2; it is what is known as a Long-Term Servicing Channel (LTSC) release. Microsoft recommends LTSC Windows Server for workloads such as Microsoft SQL Server, SharePoint and SDDC. The focus of the Microsoft Windows Server 2019 Insiders Preview is hybrid cloud, security, and application development as well as deployment including containers, software defined data center (SDDC) and software defined data infrastructure, along with converged and hyper-converged infrastructure (HCI) management.

Windows Server 2019 Preview Features

Features and enhancements in the Microsoft Windows Server 2019 Insiders Preview span HCI management, security, hybrid cloud among others.

  • Hybrid cloud – Extending Active Directory, file server synchronization, cloud backup, applications spanning on-premises and cloud, and management.
  • Security – Protect, detect and respond, including shielded VMs, an attested guarded fabric with guarded hosts, shielded Windows and Linux VMs, VMConnect for Windows and Linux troubleshooting of shielded VMs, encrypted networks, and Windows Defender Advanced Threat Protection (ATP), among other enhancements.
  • Application platform – Developer and deployment tools for Windows Server containers and Windows Subsystem for Linux (WSL). Note that Microsoft has also been reducing the size of the Server image while extending feature functionality; the smaller images take up less storage space and load faster. As part of continued serverless and container support (Windows and Linux along with Docker), there are options for deployment orchestration including Kubernetes (in beta), along with extended Windows Subsystem for Linux (WSL) support.

Other enhancements in the Microsoft Windows Server 2019 Insiders Preview include cluster sets in support of software defined data center (SDDC). Cluster sets expand SDDC clusters into loosely coupled groupings of multiple failover clusters, including compute, storage, as well as hyper-converged configurations. Virtual machines have fluidity across member clusters within a cluster set and a unified storage namespace. The existing failover cluster management experience is preserved for member clusters, along with a new cluster set instance for the aggregate resources.

Management enhancements include S2D software defined storage performance history, Project Honolulu support for storage updates, PowerShell cmdlet updates, as well as System Center 2019. Learn more about Project Honolulu hybrid management here and here.

Microsoft and Windows LTSC and SAC

As a refresher, Microsoft Windows (along with other software) is now released on two paths: the more frequent Semi-Annual Channel (SAC), and less frequent LTSC releases. Keep in mind that SAC releases focus on Server Core and Nano Server (as a container image), while LTSC includes Server with Desktop Experience as well as Server Core. For example, Windows Server 2016, released in fall 2016, is an LTSC release, while the 1709 release was a SAC release with specific enhancements for container-related environments.

There was some confusion in fall 2017 when 1709 was released, as it was optimized for container and serverless environments and thus lacked Storage Spaces Direct (S2D), leading some to speculate that S2D was dead. S2D, among other items that were not in the 1709 SAC, is very much alive and enhanced in the LTSC preview of Windows Server 2019. Learn more about Microsoft LTSC and SAC here.

Test Driving Installing The Bits

One of the enhancements in the LTSC preview candidate of Server 2019 is improved upgrading of existing environments. Granted, not everybody will choose an in-place upgrade that keeps existing files; however, some may find the capability useful. I chose the upgrade option that keeps current files in place to see how it worked. For the upgrade I used a clean and up-to-date Windows Server 2016 Datacenter edition with desktop. This test system is a VMware ESXi 6.5 guest running on flash SSD storage. Before the upgrade to Windows Server 2019, I made a VMware vSphere snapshot so I could quickly and easily restore the system to a good state should something not work.

To get the bits, go to the Windows Insiders Preview Downloads page (you will need to register).

Windows Server 2019 LTSC build 17623 is available in 18 languages in ISO format and requires a key.

The keys for the pre-release unlimited activations are:
Datacenter Edition         6XBNX-4JQGW-QX6QG-74P76-72V67
Standard Edition             MFY9F-XBN2F-TYFMP-CCV49-RMYVH

First step is downloading the bits from the Windows Insiders Preview page, including selecting the language for the image to use.

Select the language for the image to download

Starting the download

Once you have the image downloaded, apply it to your bare metal server or hypervisor guest. In this example, I copied the Windows Server 2019 image to a VMware ESXi server for a Windows Server 2016 guest machine to access via its virtual CD/DVD.

Verify the Windows Server version before upgrade

After the download, access the image; in this case, I attached the image to the virtual machine CD, then accessed it and ran the setup application.

Download updates now or later

Entering license key for the pre-release Windows Server 2019

Selecting Windows Server Datacenter with Desktop

Accepting the Software License for the pre-release version

Next up is deciding whether to do a new install (keep nothing) or an in-place upgrade. I wanted to see how smooth the in-place upgrade was, so I selected that option.

Choosing what to keep: nothing, or existing files and data

Confirming your selections

Ready to start the installation process

Installation underway of Windows Server 2019 preview

Once the installation is complete, verify that Windows Server 2019 is now installed.

Completed upgrade from Windows Server 2016 to Microsoft Windows Server 2019 Insiders Preview

The above shows verifying the system build using PowerShell, as well as the message in the lower right corner of the display. Granted, the above does not show the new functionality; however, it should give you an idea of how quickly a Windows Server 2019 preview can be deployed to explore and try out the new features.
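
The screenshot used PowerShell; as a rough stand-in, a similar build check can be scripted with the Python standard library (a sketch, run on the upgraded guest):

```python
# Minimal sketch: confirm the running Windows version and build from Python.
import platform

print(platform.system())   # e.g., Windows
print(platform.version())  # e.g., 10.0.17623 for this Server 2019 preview build
```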

Where to learn more

Learn more about Microsoft Windows Server 2019 Insiders Preview, Windows Server Storage Spaces Direct (S2D), Azure and related software defined data center (SDDC), software defined data infrastructure (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Microsoft Windows Server 2019 Insiders Preview gives a glimpse of some of the new features that are part of the next evolution of Windows Server in support of hybrid IT environments. In addition to new features and functionality that support not only hybrid cloud but also hybrid application development, deployment, DevOps and workloads, Microsoft is showing flexibility in management, ease of use, scalability, along with security as well as scale-out stability. If you have not looked at Windows Server for a while, or are involved with serverless, containers, Kubernetes among other initiatives, now is a good time to check out Microsoft Windows Server 2019 Insiders Preview.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Value Characteristics Everything Is Not The Same (Part I)


This is part one of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we start things off by looking at general application server storage I/O characteristics that have an impact on data value as well as access.


Everything is not the same across different organizations including Information Technology (IT) data centers, data infrastructures along with the applications as well as data they support. For example, there is so-called big data that can be many small files, objects, blobs or data and bit streams representing telemetry, click stream analytics, logs among other information.

Keep in mind that applications impact how data is accessed, used, processed, moved and stored. What this means is that a focus on data value, access patterns, along with other related topics need to also consider application performance, availability, capacity, economic (PACE) attributes.

If everything is not the same, why is so much data along with many applications treated the same from a PACE perspective?

Data Infrastructure resources including servers, storage, networks might be cheap or inexpensive, however, there is a cost to managing them along with data.

Managing includes data protection (backup, restore, BC, DR, HA, security) along with other activities. Likewise, there is a cost to the software along with cloud services among others. By understanding how applications use and interact with data, smarter, more informed data management decisions can be made.

IT Applications and Data Infrastructure Layers

Keep in mind that everything is not the same across various organizations, data centers, data infrastructures, data and the applications that use them. Also keep in mind that programs (e.g., applications) = algorithms (code) + data structures (how data is defined and organized, structured or unstructured).

There are traditional applications, along with those tied to Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning (ML), Big Data and other analytics including real-time click stream, media and entertainment, security and surveillance, log and telemetry processing among many others.

What this means is that there are many different applications with various attributes, along with resource (server compute, I/O, network and memory, storage) as well as service requirements.

Common Applications Characteristics

Different applications have various attributes as well as usage patterns, for example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. Performance, availability, capacity, and economics (PACE) describe application and data characteristics and needs, shown in the following figure.

Application PACE attributes (via Software Defined Data Infrastructure Essentials)

All applications have PACE attributes, however:

  • PACE attributes vary by application and usage
  • Some applications and their data are more active than others
  • PACE characteristics may vary within different parts of an application

Think of an application's PACE, along with that of its associated data, as its personality: how it behaves, what it does, how it does it, and when, along with its value, benefit, or cost as well as quality-of-service (QoS) attributes.

Understanding applications in different environments, including data values and associated PACE attributes, is essential for making informed server, storage, I/O and data infrastructure decisions. Data infrastructure decisions range from configuration to acquisitions or upgrades; when, where, why, and how to protect; and how to optimize performance, including capacity planning, reporting, and troubleshooting, not to mention addressing budget concerns.

Primary PACE attributes for active and inactive applications and data are (a small sketch follows this list):

P – Performance and activity (how things get used)
A – Availability and durability (resiliency and data protection)
C – Capacity and space (what things use or occupy)
E – Economics and Energy (people, budgets, and other barriers)
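
As referenced above, here is a minimal sketch (the application names and values are hypothetical, not from the book) of recording per-application PACE attributes so they can be compared or used to drive policy decisions:

```python
# Minimal sketch: record per-application PACE attributes (hypothetical applications and values).
from dataclasses import dataclass

@dataclass
class PACE:
    performance: str   # activity profile (how things get used)
    availability: str  # resiliency and data protection needs
    capacity: str      # space consumed or occupied
    economics: str     # budget and cost constraints

apps = {
    "order-database": PACE("high IOPs, low latency", "HA plus 4 3 2 1 protection",
                           "2 TB, modest growth", "performance per cost"),
    "media-archive":  PACE("low activity, sequential reads", "durable off-site copy",
                           "200 TB, high growth", "capacity per cost"),
}

for name, pace in apps.items():
    print(name, "->", pace)
```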

Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network, or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, business continuity, disaster recovery) that determine the tools, technologies, and techniques to use.

Budgets are also nearly always a concern, which for some applications means enabling more performance per cost while others are focused on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention, and disposition, among others.

Performance and Activity (How Resources Get Used)

Some applications or components that comprise a larger solution will have more performance demands than others. Likewise, the performance characteristics of applications along with their associated data will also vary. Performance applies to the server, storage, and I/O networking hardware along with associated software and applications.

For servers, performance is focused on how much CPU or processor time is used, along with memory and I/O operations. I/O operations to create, read, update, or delete (CRUD) data include activity rate (frequency or data velocity) of I/O operations (IOPS). Other considerations include the volume or amount of data being moved (bandwidth, throughput, transfer), response time or latency, along with queue depths.

Activity is the amount of work to do or being done in a given amount of time (seconds, minutes, hours, days, weeks), which can be transactions, rates, or IOPs. Additional performance considerations include latency, bandwidth, throughput, response time, queues, reads or writes, gets or puts, updates, lists, directories, searches, page views, files opened, videos viewed, or downloads.
 
Server, storage, and I/O network performance include:

  • Processor CPU usage time and queues (user and system overhead)
  • Memory usage effectiveness including page and swap
  • I/O activity including between servers and storage
  • Errors, retransmission, retries, and rebuilds

The following figure shows a generic performance example of data being accessed (mixed reads, writes, random, sequential, big, small, low and high latency) on a local and a remote basis. The example shows how, for a given time interval (see lower right), applications access and work with data via different data streams (larger image, left center). Also shown are queues and I/O handling along with end-to-end (E2E) response time.

Server I/O performance fundamentals (via Software Defined Data Infrastructure Essentials)

Click here to view a larger version of the above figure.

Also shown on the left in the above figure is an example of E2E response time from the application through the various data infrastructure layers, as well as, lower center, the response time from the server to the memory or storage devices.

Various queues are shown in the middle of the above figure which are indicators of how much work is occurring, if the processing is keeping up with the work or causing backlogs. Context is needed for queues, as they exist in the server, I/O networking devices, and software drivers, as well as in storage among other locations.

Some basic server, storage, I/O metrics that matter include:

  • Queue depth of I/Os waiting to be processed and concurrency
  • CPU and memory usage to process I/Os
  • I/O size, or how much data can be moved in a given operation
  • I/O activity rate or IOPs = amount of data moved per unit of time / I/O size (see the worked example after this list)
  • Bandwidth = data moved per unit of time = I/O size × I/O rate
  • Latency usually increases with larger I/O sizes, decreases with smaller requests
  • I/O rates usually increase with smaller I/O sizes and vice versa
  • Bandwidth increases with larger I/O sizes and vice versa
  • Sequential stream access data may have better performance than some random access data
  • Not all data is conducive to being sequential stream, or random
  • Lower response time is better, higher activity rates and bandwidth are better

Queues with high latency and small I/O size or small I/O rates could indicate a performance bottleneck. Queues with low latency and high I/O rates with good bandwidth or data being moved could be a good thing. An important note is to look at several metrics, not just IOPs or activity, or bandwidth, queues, or response time. Also, keep in mind that metrics that matter for your environment may be different from those for somebody else.

Something to keep in perspective is that there can be a large amount of data with low performance, or a small amount of data with high-performance, not to mention many other variations. The important concept is that as space capacity scales, that does not mean performance also improves or vice versa, after all, everything is not the same.

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Keep in mind that with application data value characteristics, everything is not the same across various organizations, data centers and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. However, all applications have some element (high or low) of performance, availability, capacity and economic (PACE) attributes, along with various similarities. Likewise, data has different value at various times. Continue reading the next post (Part II Application Data Availability Everything Is Not The Same) in this five-part mini-series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Availability 4 3 2 1 Data Protection


This is part two of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application performance, availability, capacity, economic (PACE) attributes that have an impact on data value as well as availability.


Availability (Accessibility, Durability, Consistency)

Just as there are many different aspects of and focus areas for performance, there are also several facets to availability. Note that application performance requires availability, and availability relies on some level of performance.

Availability is a broad and encompassing area that includes data protection to protect, preserve, and serve (backup/restore, archive, BC, BR, DR, HA) data and applications. There are logical and physical aspects of availability, including data protection as well as security, including key management (manage your keys for authentication and certificates) and permissions, among other things.

Availability = accessibility (can you get to your application and data) + durability (is the data intact and consistent). This includes basic Reliability, Availability, Serviceability (RAS), as well as high availability, accessibility, and durability. “Durable” has multiple meanings, so context is important. Durable means how data infrastructure resources hold up to, survive, and tolerate wear and tear from use (i.e., endurance), for example, Flash SSD or mechanical devices such as Hard Disk Drives (HDDs). Another context for durable refers to data, meaning how many copies in various places.

Server, storage, and I/O network availability topics include:

  • Resiliency and self-healing to tolerate failure or disruption
  • Hardware, software, and services configured for resiliency
  • Accessibility to reach or be reached for handling work
  • Durability and consistency of data to be available for access
  • Protection of data, applications, and assets including security

Additional server I/O and data infrastructure along with storage topics include:

  • Backup/restore, replication, snapshots, sync, and copies
  • Basic Reliability, Availability, Serviceability, HA, fail over, BC, BR, and DR
  • Alternative paths, redundant components, and associated software
  • Applications that are fault-tolerant, resilient, and self-healing
  • Non-disruptive upgrades, code (application or software) loads, and activation
  • Immediate data consistency and integrity vs. eventual consistency
  • Virus, malware, and other data corruption or loss prevention

From a data protection standpoint, the fundamental rule or guideline is 4 3 2 1, which means having at least four copies consisting of at least three versions (different points in time), with at least two of those on different systems or storage devices, and at least one of those off-site (on-line, off-line, cloud, or other). There are many variations of the 4 3 2 1 rule, shown in the following figure, along with approaches to managing which technology to use. We will go deeper into this subject in later chapters. For now, remember the following (a simple checking sketch follows the list below).

4 3 2 1 data protection (via Software Defined Data Infrastructure Essentials)

4    At least four copies of data (or more). Enables durability in case a copy goes bad, is deleted or corrupted, or a device or site fails.
3    At least three versions of the data (or more) to retain (different points in time). Enables various recovery points in time to restore, resume, or restart from.
2    Data located on two or more systems (devices or media). Enables protection against device, system, server, file system, or other fault/failure.
1    At least one of those copies off-premises and not live (isolated from the active primary copy). Enables resiliency across sites, as well as a space, time, and distance gap for protection.
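
To make the rule concrete, here is a minimal sketch that checks a set of protection copies against 4 3 2 1; the copies listed are hypothetical, and real tools would inventory actual backups:

```python
# Minimal sketch: check protection copies against the 4 3 2 1 rule (hypothetical inventory).
copies = [
    # (version or point in time, system or device, off_site)
    ("monday",   "primary-array", False),
    ("monday",   "backup-server", False),
    ("sunday",   "backup-server", False),
    ("saturday", "cloud-bucket",  True),
]

total = len(copies)                     # 4: at least four copies
versions = len({c[0] for c in copies})  # 3: at least three versions (points in time)
systems = len({c[1] for c in copies})   # 2: on at least two systems or devices
off_site = any(c[2] for c in copies)    # 1: at least one copy off-site

ok = total >= 4 and versions >= 3 and systems >= 2 and off_site
print(f"copies={total} versions={versions} systems={systems} off-site={off_site}"
      f" -> {'meets' if ok else 'fails'} 4 3 2 1")
```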

Capacity and Space (What Gets Consumed and Occupied)

In addition to being available and accessible in a timely manner (performance), data (and applications) occupy space. That space includes memory in servers, along with consumable processor CPU time and I/O (performance), including over networks.

Data and applications also consume storage space where they are stored. In addition to basic data space, there is also space consumed for metadata as well as protection copies (and overhead), application settings, logs, and other items. Another aspect of capacity includes network IP ports and addresses, software licenses, server, storage, and network bandwidth or service time.

Server, storage, and I/O network capacity topics include:

  • Consumable time-expiring resources (processor time, I/O, network bandwidth)
  • Network IP and other addresses
  • Physical resources of servers, storage, and I/O networking devices
  • Software licenses based on consumption or number of users
  • Primary and protection copies of data and applications
  • Active and standby data infrastructure resources and sites
  • Data footprint reduction (DFR) tools and techniques for space optimization
  • Policies, quotas, thresholds, limits, and capacity QoS
  • Application and database optimization

DFR includes various techniques, technologies, and tools to reduce the impact or overhead of protecting, preserving, and serving more data for longer periods of time. There are many different approaches to implementing a DFR strategy, since there are various applications and data.

Common DFR techniques and technologies include archiving, backup modernization, copy data management (CDM), cleanup, compression and consolidation, data management, deletion and dedupe, storage tiering, RAID (including parity-based, erasure codes, local reconstruction codes [LRC], Reed-Solomon, and Ceph Shingled Erasure Code [SHEC], among others), along with protection configurations and thin provisioning, among others.

DFR can be implemented in various complementary locations from row-level compression in database or email to normalized databases, to file systems, operating systems, appliances, and storage systems using various techniques.

Also, keep in mind that not all data is the same; some is sparse, some is dense, and some can be compressed or deduped while other data cannot. However, identical copies can be identified, with links created to a common copy (see the sketch below).
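
As a simple illustration of that last point, identical files can be found by content hash; this is a sketch only (the directory name is hypothetical, and real DFR tools dedupe at the block or chunk level with far more care):

```python
# Minimal sketch: find identical files by content hash, a naive form of dedupe detection.
import hashlib
from pathlib import Path

seen = {}  # digest -> first file seen with that content
for p in Path("data").rglob("*"):  # hypothetical directory tree to scan
    if p.is_file():
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        if digest in seen:
            print(f"{p} duplicates {seen[digest]} (candidate for a link to a common copy)")
        else:
            seen[digest] = p
```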

Economics (People, Budgets, Energy and other Constraints)

If one constant in life and technology is change, then the other is concern about economics or cost. There is a cost to enable and maintain a data infrastructure, on-premises or in the cloud, which exists to protect, preserve, and serve data and information applications.

However, there should also be a benefit to having the data infrastructure to house data and support applications that provide information to users of the services. A common economic focus is what something costs, either as up-front capital expenditure (CapEx) or as an operating expenditure (OpEx) expense, along with recurring fees.

In general, economic considerations include:

  • Budgets (CapEx and OpEx), both up front and in recurring fees
  • Whether you buy, lease, rent, subscribe, or use free and open sources
  • People time needed to integrate and support even free open-source software
  • Costs including hardware, software, services, power, cooling, facilities, tools
  • People time includes base salary, benefits, training and education

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Keep in mind that with application data value characteristics, everything is not the same across various organizations, data centers and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. All applications have some element of performance, availability, capacity and economic (PACE) needs as well as resource demands. Around data storage there is often a focus on storage efficiency and utilization, which is where data footprint reduction (DFR) techniques, tools, trends and technologies address capacity requirements. However, with data storage there is also an expanding focus on storage effectiveness, also known as productivity, tied to performance, along with availability including 4 3 2 1 data protection. Continue reading the next post (Part III Application Data Characteristics Types Everything Is Not The Same) in this series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Characteristics Types Everything Is Not The Same


This is part three of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application and data characteristics with a focus on different types of data. There is more to data than simply being big data, fast data, big fast data, or structured, semi-structured or unstructured, some of which has been touched on in this series, with more to follow. Note that there is also data in the form of programs, applications, code, rules, policies and configuration settings, along with metadata and other stored items.


Various Types of Data

Data types and characteristics include big data, little data, fast data, and old as well as new data with different values, life cycles, volumes and velocities. There is data in files and objects, large and small, representing images, figures, text, binary, structured or unstructured content, software defined by the applications that create, modify and use it.

There are many different types of data and applications to meet various business, organizational, or functional needs. Keep in mind that applications are based on programs, which consist of algorithms and data structures that define the data, how to use it, as well as how and when to store it. Those data structures define the data that programs transform into information, stored in memory and on data storage in various formats.

Just as various applications have different algorithms, they also have different types of data. Even though everything is not the same across environments, or even in how the same applications get used across various organizations, there are some general similarities and characteristics. Keep in mind that information is the result of programs (applications and their algorithms) processing data into something useful or of value.

Data typically has a basic life cycle of:

  • Creation and some activity, including being protected
  • Dormant, followed by either continued activity or going inactive
  • Disposition (delete or remove)

In general, data can be:

  • Temporary, ephemeral or transient
  • Dynamic or changing (“hot data”)
  • Active static on-line, near-line, or off-line (“warm-data”)
  • In-active static on-line or off-line (“cold data”)

Data is organized as:

  • Structured
  • Semi-structured
  • Unstructured

General data characteristics include:

  • Value = From no value to unknown to some or high value
  • Volume = Amount of data, files, objects of a given size
  • Variety = Various types of data (small, big, fast, structured, unstructured)
  • Velocity = Data streams, flows, rates, load, process, access, active or static

The following figure shows how different data has various values over time. Data that has no value today or in the future can be deleted, while data with unknown value can be retained.

Different data with various values over time: known, unknown and no value

General characteristics include the value of the data which in turn determines its performance, availability, capacity, and economic considerations. Also, data can be ephemeral (temporary) or kept for longer periods of time on persistent, non-volatile storage (you do not lose the data when power is turned off). Examples of temporary scratch include work and scratch areas such as where data gets imported into, or exported out of, an application or database.

Data can also be little, big, or big and fast, terms which describe in part the size as well as volume along with the speed or velocity of being created, accessed, and processed. The importance of understanding characteristics of data and how their associated applications use them is to enable effective decision-making about performance, availability, capacity, and economics of data infrastructure resources.

Data Value

There is more to data storage than how much space capacity per cost.

All data has one of three basic values:

  • No value = ephemeral/temp/scratch = Why keep it?
  • Some value = current or emerging future value, which can be low or high = Keep
  • Unknown value = protect until value is unlocked, or no remaining value

In addition to the above basic three, data with some value can also be further subdivided into little value, some value, or high value. Of course, you can keep subdividing into as many more or different categories as needed, after all, everything is not always the same across environments.

Besides data having some value, that value can also change, increasing or decreasing over time, or even going from unknown to known value, known to unknown, or to no value. Data with no value can be discarded; if in doubt, make and keep a copy of that data somewhere safe until its value (or lack of value) is fully known and understood.

The importance of understanding the value of data is to enable effective decision-making on where and how to protect, preserve, and cost-effectively store the data. Note that cost-effective does not necessarily mean the cheapest or lowest-cost approach, rather it means the way that aligns with the value and importance of the data at a given point in time.

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software-defined data center (SDDC), software-defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Data has different value at various times, and that value is also evolving. Everything is not the same across various organizations, data centers and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. Continue reading the next post (Part IV Application Data Volume Velocity Variety Everything Not The Same) in this series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Volume Velocity Variety Everything Is Not The Same


This is part four of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application and data characteristics with a focus on data volume, velocity and variety; after all, everything is not the same, not to mention the many different aspects of big data as well as little data.


Volume of Data

More data is being created at a faster rate every day, and that data is being retained for longer periods. Some data being retained has known value, while a growing amount has unknown value. Data is generated or created from many sources, including mobile devices, social networks, web-connected systems or machines, and sensors including IoT and IoD. Besides the many places data is created, there are also many consumers of data (applications), ranging from legacy to mobile, cloud and IoT, among others.

Unknown-value data may eventually have value in the future, when somebody realizes they can do something with it, or when a technology tool or application becomes available to transform the data of unknown value into valuable information.

Some data gets retained in its native or raw form, while other data gets processed by application program algorithms into summary data, or is curated and aggregated with other data to be transformed into new useful data. The figure below shows, from left to right and front to back, more data being created, and that data also getting larger over time. For example, on the left are two data items, objects, files, or blocks representing some information.

In the center of the following figure are more columns and rows of data, with each of those data items also becoming larger. Moving farther to the right, there are yet more data items stacked up higher, as well as across and farther back, with those items also being larger. The following figure can represent blocks of storage, files in a file system, rows, and columns in a database or key-value repository, or objects in a cloud or object storage system.

Increasing data velocity and volume: more data and data getting larger

In addition to more data being created, some of that data is relatively small in terms of the records or data structure entities being stored. However, there can be a large quantity of those smaller data items. In addition to the amount of data, as well as the size of the data, protection or overhead copies of data are also kept.

Another dimension is that data is also getting larger, as the data structures describing a piece of data for an application have increased in size. For example, a still photograph taken with a digital camera, cell phone, or another mobile handheld device, drone, or other IoT device increases in size with each new generation of cameras as there are more megapixels.

Variety of Data

In addition to having value and volume, there are also different varieties of data, including ephemeral (temporary), persistent, primary, metadata, structured, semi-structured, unstructured, little, and big data. Keep in mind that programs, applications, tools, and utilities get stored as data, while they also use, create, access, and manage data.

There is also primary data and metadata, or data about data, as well as system data that is also sometimes referred to as metadata. Here is where context comes into play as part of tradecraft, as there can be metadata describing data being used by programs, as well as metadata about systems, applications, file systems, databases, and storage systems, among other things, including little and big data.

Context also matters regarding big data, as there are applications such as statistical analysis software and Hadoop, among others, for processing (analyzing) large amounts of data. The data being processed may not be big regarding the records or data entity items, but there may be a large volume. In addition to big data analytics, data, and applications, there is also data that is very big (as well as large volumes or collections of data sets).

For example, video and audio, among others, may also be referred to as big fast data, or large data. A challenge with larger data items is the complexity of moving them over distance promptly, as well as processing that requires new approaches, algorithms, data structures, and storage management techniques.

Likewise, the challenges with large volumes of smaller data are similar in that data needs to be moved, protected, preserved, and served cost-effectively for long periods of time. Both large and small data are stored (in memory or storage) in various types of data repositories.

In general, data in repositories is accessed locally, remotely, or via a cloud using:

  • Object and blobs stream, queue, and Application Programming Interface (API)
  • File-based using local or networked file systems
  • Block-based access of disk partitions, LUNs (logical unit numbers), or volumes

The following figure shows varieties of application data value including (left) photos or images, audio, videos, and various log, event, and telemetry data, as well as (right) sparse and dense data.

Varieties of data (bits, bytes, blocks, blobs, and bitstreams)

Velocity of Data

Data, in addition to having value (known, unknown, or none), volume (size and quantity), and variety (structured, unstructured, semi-structured, primary, metadata, small, big), also has velocity. Velocity refers to how fast (or slowly) data is accessed, including being stored, retrieved, updated, or scanned, and whether it is active (updated) or static (fixed), or dormant and inactive. In addition to data access and life cycle, velocity also refers to how data is used, such as random or sequential access or some combination. Think of data velocity as how data, or streams of data, flow in various ways.

Velocity also describes how data is used and accessed (a small access-pattern sketch follows this list), including:

  • Active (hot), static (warm and WORM), or dormant (cold)
  • Random or sequential, read or write-accessed
  • Real-time (online, synchronous) or time-delayed
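
As referenced above, here is a minimal sketch contrasting sequential and random read patterns against a scratch file (the file name is hypothetical, and results vary widely with hardware and operating system caching):

```python
# Minimal sketch: time sequential vs random reads of the same blocks (caching skews results).
import os
import random
import time

path = "scratch.bin"           # hypothetical scratch file
block, blocks = 4096, 25_000   # ~100 MB of test data

with open(path, "wb") as f:
    f.write(os.urandom(block * blocks))

def read_blocks(offsets):
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block)
    return time.perf_counter() - start

sequential = [i * block for i in range(blocks)]
random_order = random.sample(sequential, len(sequential))
print(f"sequential: {read_blocks(sequential):.2f}s, random: {read_blocks(random_order):.2f}s")
os.remove(path)
```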

Why this matters is that by understanding how applications use data, or how data is accessed via applications, you can make informed decisions. Such insight also enables you to design, configure, and manage server, storage, and I/O resources (hardware, software, services) to meet various needs. Understanding application data value, including the velocity of the data both when it is created and when it is used, is important for aligning the applicable performance techniques and technologies.

Where to learn more

Learn more about Application Data Value, application characteristics, performance, availability, capacity, economic (PACE) along with data protection, software-defined data center (SDDC), software-defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Data has different value, size, and velocity as part of its characteristics, including how it is used by various applications. Keep in mind that with application data value characteristics, everything is not the same across various organizations, data centers and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. Continue reading the next post (Part V Application Data Access Life cycle Patterns Everything Is Not The Same) in this series here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Application Data Access Lifecycle Patterns Everything Is Not The Same


This is part five of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we look at various application and data life-cycle patterns, as well as wrap up this series.


Active (Hot), Static (Warm and WORM), or Dormant (Cold) Data and Lifecycles

When it comes to application data value, a common question I hear is: why not keep all data?

If the data has value, and you have a large enough budget, why not? On the other hand, most organizations have a budget and other constraints that determine how much and what data to retain.

Another common question I get asked (or told) is: isn't the objective to keep less data to cut costs?

If the data has no value, then get rid of it. On the other hand, if data has value or unknown value, then find ways to remove the cost of keeping more data for longer periods of time so its value can be realized.

In general, over the data life cycle (called by some cradle to grave, or birth/creation to disposition), data is created, saved and stored, and perhaps updated and read, with access patterns, along with value, changing over time. During that time, the data (which includes applications and their settings) will be protected with copies or some other technique, and eventually disposed of.

Between the time when data is created and when it is disposed of, there are many variations of what gets done and needs to be done. Considering static data for a moment, some applications and their data, or data and their applications, create data which is for a short period, then goes dormant, then is active again briefly before going cold (see the left side of the following figure). This is a classic application, data, and information life-cycle model (ILM), and tiering or data movement and migration that still applies for some scenarios.

Figure: Changing data access patterns for different applications

However, a newer scenario over the past several years that continues to increase is shown on the right side of the above figure. In this scenario, data is initially active for updates, then goes cold or WORM (Write Once/Read Many); however, it warms back up as a static reference, on the web, as big data, and for other uses where it is used to create new data and information.

Data, in addition to its other attributes already mentioned, can be active (hot), residing in a memory cache or buffers inside a server, or on a fast storage appliance or caching appliance. Hot data means that it is actively being used for reads or writes (this is what the term heat map pertains to in the context of servers, storage, data, and applications). The heat map shows where the hot or active data is, along with its other characteristics.
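
To make the heat-map idea more concrete, here is a minimal Python sketch (the thresholds, names, and the notion of tracked per-item access counts are illustrative assumptions, not from any particular product) that classifies data items as hot, warm, or cold based on activity:

```python
from dataclasses import dataclass

# Illustrative thresholds (accesses per hour); real heat maps derive
# these from observed workload statistics rather than fixed constants.
HOT_THRESHOLD = 100.0
WARM_THRESHOLD = 1.0

@dataclass
class DataItem:
    name: str
    accesses_per_hour: float  # combined read and write activity

def heat_class(item: DataItem) -> str:
    """Classify an item as hot, warm, or cold based on its activity."""
    if item.accesses_per_hour >= HOT_THRESHOLD:
        return "hot"   # candidate for memory, cache, or fast flash
    if item.accesses_per_hour >= WARM_THRESHOLD:
        return "warm"  # occasionally used, standard tier
    return "cold"      # rarely if ever accessed, low-cost tier

for item in [DataItem("orders.db", 450.0), DataItem("fy2016.tar", 0.01)]:
    print(item.name, heat_class(item))
```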

Context is important here, as there are also IT facilities heat maps, which refer to physical facilities, including which servers are consuming power and generating heat. Note that some current and emerging data center infrastructure management (DCIM) tools can correlate the physical facilities' power, cooling, and heat to actual work being done from an application's perspective. This correlated or converged management view enables more granular analysis and effective decision-making on how to best utilize data infrastructure resources.

In addition to being hot or active, data can be warm (not as heavily accessed) or cold (rarely if ever accessed), as well as online, near-line, or off-line. As their names imply, warm data may occasionally be used, either updated and written, or static and just being read. Some data also gets protected as WORM data using hardware or software technologies. WORM data, not to be confused with warm data, is immutable (it cannot be changed).

When looking at data (or storage), it is important to see when the data was created as well as when it was modified. However, you should avoid the mistake of looking only at when it was created or modified: instead, also look at when it was last read, as well as how often it is read. You might find that some data has not been updated for several years, yet it is still accessed several times an hour or minute. Also keep in mind that the metadata about the actual data may be updated even while the data itself remains static.
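
As a simple way to see last-read versus last-modified in practice, the following Python sketch uses os.stat to compare the two. Keep in mind that many systems mount file systems with noatime or relatime, so the access time is only a hint:

```python
import os
import time

def file_activity(path: str) -> dict:
    """Report days since a file was last modified vs. last read.

    Caveat: many systems mount with noatime or relatime, so st_atime
    may be stale or only loosely updated; treat it as a hint.
    """
    st = os.stat(path)
    now = time.time()
    return {
        "path": path,
        "days_since_modified": round((now - st.st_mtime) / 86400, 1),
        "days_since_read": round((now - st.st_atime) / 86400, 1),
    }

# An old mtime with a recent atime indicates static but still-active data:
# not updated in years, yet read often, so keep it on accessible storage.
print(file_activity("/etc/hosts"))  # any existing path works here
```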

Also, look at your applications' characteristics as well as how data gets used, to see if it is conducive to caching or automated tiering based on activity, events, or time. For example, a large amount of data for an energy or oil exploration project may normally sit on slower, lower-cost storage, yet now and then some analysis needs to run against it.

Using data and storage management tools, given notice or based on activity, that large or big data could be promoted to faster storage, or applications migrated to be closer to the data, to speed up processing. Another example is weekly, monthly, quarterly, or year-end processing of financial, accounting, payroll, inventory, or enterprise resource planning (ERP) schedules. Knowing how and when the applications use the data, which also means understanding the data, automated tools and policies can be used to tier or cache data to speed up processing and thereby boost productivity.
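
Here is a small hypothetical sketch of such a schedule-driven tiering policy in Python; the promote and demote functions are stand-ins for whatever storage, file system, or cloud API your environment actually exposes, and the month-end window is an assumed example:

```python
import calendar
import datetime

# Hypothetical tiering actions; a real environment would call a storage
# array, file system, or cloud provider API instead of these stand-ins.
def promote_to_fast_tier(dataset: str) -> None:
    print(f"promote {dataset} to flash/fast tier")

def demote_to_capacity_tier(dataset: str) -> None:
    print(f"demote {dataset} to lower-cost capacity tier")

def in_month_end_window(today: datetime.date, days: int = 3) -> bool:
    """True during the last `days` days of the month (illustrative policy)."""
    last_day = calendar.monthrange(today.year, today.month)[1]
    return today.day > last_day - days

def apply_tiering_policy(dataset: str, today: datetime.date) -> None:
    # Promote ahead of assumed month-end ERP/financial batch processing,
    # demote the rest of the time.
    if in_month_end_window(today):
        promote_to_fast_tier(dataset)
    else:
        demote_to_capacity_tier(dataset)

apply_tiering_policy("erp_ledger", datetime.date.today())
```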

All applications have performance, availability, capacity, economic (PACE) attributes (see the sketch after this list); however:

  • PACE attributes vary by Application Data Value and usage
  • Some applications and their data are more active than others
  • PACE characteristics may vary within different parts of an application
  • PACE application and data characteristics along with value change over time

Read more about Application Data Value, PACE and application characteristics in Software Defined Data Infrastructure Essentials (CRC Press 2017).

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Keep in mind that with Application Data Value, everything is not the same across various organizations, data centers, data infrastructures, data, and the applications that use them.

Also keep in mind that more data is being created; the size of those data items, files, objects, entities, and records is increasing, as well as the speed at which they get created and accessed. The challenge is not just that there is more data, or that data is bigger, or accessed faster; it is all of those, along with changing value as well as diverse applications, to keep in perspective. With the new General Data Protection Regulation (GDPR) going into effect May 25, 2018, now is a good time to assess and gain insight into what data you have, its value, and its retention as well as disposition policies.

Remember, there are different data types, values, life cycles, volumes and velocities that change over time, and with Application Data Value Everything Is Not The Same, so why treat and manage everything the same?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

World Backup Day 2018 Data Protection Readiness Reminder


It’s that time of year again, World Backup Day 2018 Data Protection Readiness Reminder.

In case you have forgotten, or were not aware, this coming Saturday, March 31, is World Backup (and Recovery) Day. The annual day is a reminder to make sure you are protecting your applications, data, information, and configuration settings, as well as your data infrastructures. While the emphasis is on backup, that also means recovery, as well as testing to make sure everything is working properly.


It's time that the focus of World Backup Day expands from just backup to broader data protection and things that start with R. Some data protection (and backup) related things, tools, tradecraft techniques, technologies and trends that start with R include readiness, recovery, reconstruct, restore, restart, resume, replication, rollback, roll forward, RAID and erasure codes, resiliency, recovery time objective (RTO), and recovery point objective (RPO), among others.


Keep in mind that data protection is a broader focus than just backup and recovery. Data protection includes disaster recovery (DR), business continuance (BC), business resiliency (BR), security (logical and physical), standard and high availability (HA), as well as durability, archiving, data footprint reduction, and copy data management (CDM), along with various technologies, tradecraft techniques, and tools.

Figure: the 4 3 2 1 (and older 3 2 1) data protection rules

Quick Data Protection, Backup and Recovery Checklist

  • Keep the 4 3 2 1, or the shorter older 3 2 1, data protection rules in mind (see the sketch after this checklist)
  • Do you know what data, applications, configuration settings, meta data, keys, certificates are being protected?
  • Do you know how many versions and copies exist, where they are stored, and what is on- or off-site, on- or off-line?
  • Implement data protection at different intervals and coverage of various layers (application, transaction, database, file system, operating system, hypervisors, device or volume among others)

  • Have you protected your data protection environment including software, configuration, catalogs, indexes, databases along with management tools?
  • Verify that data protection point-in-time copies (backups, snapshots, consistency points, checkpoints, versions, replicas) are working as intended
  • Make sure not only that the point-in-time protection copies are running when scheduled, but also that they are protecting what's intended

  • Test to see if the protection copies can actually be used; this means restoring as well as accessing the data via applications
  • Watch out to prevent a disaster in the course of testing; plan, prepare, practice, learn, refine, improve
  • In addition to verifying your data protection (backup, BC, DR) at work, also take time to see how your home or personal data is protected
  • View additional tips, techniques, and checklist items in this Data Protection fundamentals series of posts here.
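
As promised above, here is a minimal sketch of checking protection copies against one common reading of the 4 3 2 1 rule (at least four copies or versions, on at least three different systems or media, at least two off-site, at least one off-line); your definitions of copies, media, and off-line may vary:

```python
from dataclasses import dataclass

@dataclass
class ProtectionCopy:
    location: str
    media: str     # e.g., "disk", "tape", "cloud-object"
    offsite: bool
    offline: bool  # air-gapped, unreachable by malware or mistakes

def meets_4321(copies: list[ProtectionCopy]) -> bool:
    """One common reading of the 4 3 2 1 rule: 4+ copies or versions,
    on 3+ different media or systems, 2+ off-site, 1+ off-line."""
    return (len(copies) >= 4
            and len({c.media for c in copies}) >= 3
            and sum(c.offsite for c in copies) >= 2
            and any(c.offline for c in copies))

copies = [
    ProtectionCopy("primary-dc", "disk", offsite=False, offline=False),
    ProtectionCopy("primary-dc", "snapshot", offsite=False, offline=False),
    ProtectionCopy("cloud", "cloud-object", offsite=True, offline=False),
    ProtectionCopy("vault", "tape", offsite=True, offline=True),
]
print(meets_4321(copies))  # True
```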


Where To Learn More

View additional Data Infrastructure Data Protection and related tools, trends, technology and tradecraft skills topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

You cannot go forward if you cannot go back to a particular point in time (e.g., a recovery point objective, or RPO). Likewise, if you cannot go back to a given RPO, how can you go forward with your business, as well as meet your recovery time objective (RTO)?
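
In code terms, an RPO check is simply comparing the age of your newest usable protection copy against the tolerable data-loss window. A minimal sketch follows (the times and the function name are illustrative):

```python
import datetime

def rpo_met(last_good_copy: datetime.datetime,
            rpo: datetime.timedelta,
            now: datetime.datetime) -> bool:
    """The RPO is met if the newest usable protection copy is no older
    than the maximum tolerable window of data loss."""
    return (now - last_good_copy) <= rpo

# Example: a copy from 10 hours ago meets a 24-hour RPO.
now = datetime.datetime(2018, 3, 31, 9, 0)
last_backup = datetime.datetime(2018, 3, 30, 23, 0)
print(rpo_met(last_backup, datetime.timedelta(hours=24), now))  # True
```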


Backup is as important as restore; without a good backup or data protection point-in-time copy, how can you restore? Some will say backup is more important than recovery; however, it is the enablement that matters, in other words being able to provide data protection and recover, restart, resume, or do other things that start with R. World Backup Day should be a reminder to think about broader data protection, which also means recovery, restore, and verifying that your copies and versions are good. Keep the above in mind, and consider this your World Backup Day 2018 Data Protection Readiness Reminder.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How to Achieve Flexible Data Protection Availability with All Flash Storage Solutions


Updated 1/21/2018

How to Achieve Flexible Data Protection and Availability with All-Flash Storage Solutions

Interactive webinar discussion (not death by PowerPoint or UI/GUI product demo ;) pertaining to flash data protection)
Tuesday, January 30, 2018, 11AM PT / 2PM ET
Via Redmond Magazine (Free with registration)

Everything is not the same across different organizations, environments, application workloads and the data infrastructures that support them. Fast applications and workloads need fast protection, restoration, and resumption, as well as fast flash storage. This applies across legacy, software-defined, virtual, container, cloud, hybrid, converged and HCI among other environments.


Join me along with representatives from Pure Storage and Veeam for this interactive discussion as we explore how to boost the performance, availability, capacity, and economics (PACE) of your applications along with the data infrastructures that support them.

  • How all-flash storage enables faster protection and restoration of fast applications
  • Why data protection and availability should not be an afterthought
  • Ways to leverage your data protection storage to drive business change
  • How to simplify and reduce complexity to boost productivity while lowering costs
  • Why workload aggregation consolidation should not cause aggravation

Register for the live event or catch the replay here.

Where to learn more

Learn more about data protection, SSD, flash, data infrastructure and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means and wrap-up

Fast applications need fast and resilient data infrastructures that include server, storage, and I/O networking along with data protection. Performance depends on availability along with durability; likewise, availability and accessibility depend on performance; they go hand in hand. Join me and others from Pure Storage as well as Veeam for this conversational discussion about how to achieve flexible data protection and availability with all-flash storage solutions.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.