VMware continues cloud construction with March announcements

VMware continues cloud construction with March announcements of new features and other enhancements.

VMware Cloud Provides Consistent Operations and Infrastructure Via: VMware.com

With its recent announcements, VMware continues cloud construction, adding new features, enhancements, partnerships and services.

VMware continues cloud construction. Like other vendors and service providers that tested the waters of operating their own public cloud, VMware has moved beyond its vCloud Air initiative, selling that business to OVH. VMware, while a publicly traded company (VMW), is by way of majority ownership part of the Dell Technologies family of companies via the 2016 acquisition of EMC by Dell. What this means is that, like Dell Technologies, VMware is focused on providing solutions and services to its cloud provider partners instead of building, deploying and running its own cloud in competition with those partners.

VMware Cloud Data Infrastructure and SDDC layers Via: VMware.com

The VMware Cloud message and strategy centers on providing software solutions to cloud and other data infrastructure partners (and customers) instead of competing with them (e.g. divesting vCloud Air, partnering with AWS and IBM Softlayer). Part of the strategy is to provide consistent operations and management across clouds, containers, virtual machines (VM) as well as other software defined data center (SDDC) and software defined data infrastructure environments.

In other words, VMware provides consistent management that leverages the common experience of data infrastructure staff, along with shared resources, across hybrid, cross-cloud and software defined environments, in support of existing as well as cloud native applications.

VMware Cloud on AWS Image via: AWS.com

Note that VMware Cloud services run on top of AWS EC2 bare metal (BM) server instances, as well as on BM instances at IBM Softlayer and OVH. Learn more about AWS EC2 BM compute instances, aka Metal as a Service (MaaS), here. In addition to AWS, IBM and OVH, VMware claims over 4,000 regional cloud and managed service providers have built out their data infrastructures using VMware based technologies.

VMware continues cloud construction updates

Building off of previous announcements, VMware continues cloud construction with enhancements to its Amazon Web Services (AWS) partnership along with services for the IBM Softlayer cloud as well as OVH. As a refresher, OVH now operates what was formerly known as VMware vCloud Air before it was sold off.

Besides expanding on existing cloud partner solution offerings, VMware also announced additional cloud, software defined data center (SDDC) and other software defined data infrastructure management capabilities. SDDC and data infrastructure management tools include those leveraging VMware's acquisition of Wavefront, among others.

VMware Cloud Updates and New Features

  • VMware Cloud on AWS European regions (now in London, adding Frankfurt, Germany)
  • Stretch Clusters with synchronous replication for cross geography location resiliency
  • Support for data intensive workloads including data footprint reduction (DFR) with vSAN based compression and deduplication
  • Fujitsu services offering relationships
  • Expanded VMware Cloud Services enhancements

VMware Cloud Services enhancements include:

  • Hybrid Cloud Extension
  • Log intelligence
  • Cost insight
  • Wavefront

VMware Cloud in additional AWS Regions

As part of service expansion, VMware Cloud on AWS has been extended into a European region (London), with plans to expand into Frankfurt and an Asia Pacific location. Previously, VMware Cloud on AWS was available in the US West (Oregon) and US East (Northern Virginia) regions. Learn more about AWS Regions and Availability Zones (AZ) here.

VMware Cloud Stretch Cluster

VMware Cloud on AWS Stretch Clusters Source: VMware.com

In addition to expanding into additional regions, VMware Cloud on AWS is also being extended with stretch clusters for geographically dispersed protection. Stretched clusters provide protection against an AZ failure (e.g. a data center site) for mission critical applications. Built on vSphere HA and DRS automated host failure technology, stretched clusters provide a recovery point objective of zero (RPO 0) for continuous protection and high availability across AZs at the data infrastructure layer.

The benefit of data infrastructure layer HA and resiliency is not having to re-architect or modify the applications and software layered above it. Synchronous replication between AZs enables RPO 0; if one AZ goes down, it is treated as a vSphere HA event with VMs restarted in another AZ.
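
To make the RPO 0 point concrete, here is a minimal Python sketch (my own illustration, not VMware code; the AZ names are hypothetical) of why synchronous replication yields zero data loss: a write is acknowledged only after every AZ holds the data, so a surviving AZ always has an identical copy when HA restarts the VMs.

```python
class AvailabilityZone:
    """Toy stand-in for one AZ's storage endpoint (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}       # key -> data persisted in this AZ
        self.healthy = True

    def write(self, key, data):
        if not self.healthy:
            raise IOError(f"{self.name} is down")
        self.blocks[key] = data

def synchronous_write(azs, key, data):
    """RPO 0: acknowledge only after *every* AZ has persisted the write."""
    for az in azs:
        az.write(key, data)    # all copies land before the caller gets an ack
    return True

az_a = AvailabilityZone("us-east-1a")
az_b = AvailabilityZone("us-east-1b")
synchronous_write([az_a, az_b], "vm-disk-block-42", b"payload")

# If az_a now fails, az_b already holds an identical copy (zero data loss),
# so the workload can simply be restarted against az_b -- a vSphere HA event.
az_a.healthy = False
assert az_b.blocks["vm-disk-block-42"] == b"payload"
```

With asynchronous replication the ack would come back before the second copy landed, and that window is exactly what makes RPO greater than zero.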

vSAN based Data Footprint Reduction (DFR) aka Compression and Deduplication

To support applications that leverage large amounts of data (data intensive applications in marketing speak), VMware is leveraging vSAN based data footprint reduction (DFR) techniques including compression as well as deduplication (dedupe). With DFR technologies like compression and dedupe integrated into vSAN, VMware Cloud can store more data in a given cubic density. Storing more data in a given footprint improves storage efficiency (e.g. space saving utilization), while performance acceleration also facilitates storage effectiveness along with productivity.

With VMware vSAN technology as one of the core underlying technologies enabling VMware Cloud on AWS (among other deployments), applications with large data needs can store more data at a lower cost point. Note that VMware Cloud can support 10 clusters per SDDC deployment, each cluster having up to 32 nodes, with cluster wide and aware dedupe. Also note that for performance, VMware Cloud on AWS leverages NVMe attached Solid State Devices (SSD) to boost effectiveness and productivity.
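
As a back-of-the-envelope illustration of how dedupe and compression combine (a generic sketch, not vSAN's actual implementation), the following Python chunks a data stream, keeps one compressed copy per unique chunk, and reports the effective reduction ratio:

```python
import hashlib
import zlib

def dedupe_and_compress(data, chunk_size=4096):
    """Keep one compressed copy per unique chunk plus a rebuild recipe."""
    store = {}             # chunk fingerprint -> compressed bytes
    recipe = []            # ordered fingerprints to reconstruct the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:                   # dedupe: store each chunk once
            store[fp] = zlib.compress(chunk)  # then compress what remains
        recipe.append(fp)
    stored = sum(len(c) for c in store.values())
    return store, recipe, len(data) / stored

# Redundant data (think many VMs sharing the same OS blocks) reduces well.
data = (bytes(4096) + b"x" * 4096) * 8        # 16 chunks, only 2 unique
store, recipe, ratio = dedupe_and_compress(data)
print(f"unique chunks: {len(store)}, effective reduction: {ratio:.0f}:1")
```

The ratio you actually get depends entirely on how redundant and compressible the workload's data is, which is why vendors quote ranges rather than a single number.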

VMware Hybrid Cloud Extension

Extending VMware vSphere any to any migration across clouds Source: VMware.com

VMware Hybrid Cloud Extension enables common management of the underlying data infrastructure as well as software defined environments across public, private and hybrid clouds. Capabilities include warm VM migration across various software defined environments, from local on-premises and private clouds to public clouds.

New enhancements deliver previously available technology as a service for enterprises as well as service providers, supporting data center to data center, cloud AZ to AZ, and region to region migrations. Use cases include small to large bulk migrations of hundreds to thousands of VMs, covering both the scheduling and the actual moves. Moves and migrations can span hybrid deployments with a mix of on-premises and various cloud services.
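
A bulk move of hundreds to thousands of VMs is usually scheduled in waves rather than all at once. Here is a simple sketch (my own illustration, not the Hybrid Cloud Extension API) of carving a VM inventory into ordered migration waves:

```python
from itertools import islice

def plan_migration_waves(vms, wave_size=50):
    """Split a VM inventory into ordered waves; one wave moves at a time."""
    it = iter(sorted(vms))          # deterministic ordering for the schedule
    waves = []
    while True:
        wave = list(islice(it, wave_size))
        if not wave:
            break
        waves.append(wave)
    return waves

vms = [f"vm-{i:04d}" for i in range(230)]     # hypothetical inventory
waves = plan_migration_waves(vms, wave_size=50)
print(f"{len(waves)} waves; final wave has {len(waves[-1])} VMs")
# → 5 waves; final wave has 30 VMs
```

Real tooling layers dependency ordering, maintenance windows and rollback on top of this, but the wave structure is the common starting point.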

VMware Cloud Cost Insight

VMware Cost Insight enables analysis and comparison of cloud costs across public (AWS, Azure) and private VMware clouds, to avoid flying blind in and among clouds. VMware Cloud Cost Insight provides awareness of how resources are used, their cost, and their benefit to applications as well as IT budget impacts. It integrates the vSAN sizer tool along with AWS metrics for improved situational awareness, cost modeling, analysis and what if comparisons.

With integration to Network Insight, VMware Cloud Cost Insight also provides awareness of networking costs in support of migrations. What this means is that using VMware Cloud Cost Insight you can take the guesswork out of what your expenses will be for public, private on-premises or hybrid clouds, by having deeper insight into your SDDC environment. Learn more about VMware Cost Insight here.
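
The kind of what-if comparison such a tool automates can be sketched in a few lines of Python. All rates below are invented for illustration; real numbers come from each provider's pricing and your own on-premises cost model:

```python
def monthly_cost(instances, hourly_rate, hours=730, storage_gb=0, gb_rate=0.0):
    """Rough monthly estimate: compute hours plus provisioned storage."""
    return instances * hourly_rate * hours + storage_gb * gb_rate

# Hypothetical rates -- substitute actual pricing for a real comparison.
scenarios = {
    "public cloud A": monthly_cost(10, 0.20, storage_gb=5000, gb_rate=0.10),
    "public cloud B": monthly_cost(10, 0.22, storage_gb=5000, gb_rate=0.08),
    "on-prem VMware": monthly_cost(10, 0.15, storage_gb=5000, gb_rate=0.05),
}
for name, cost in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.2f}/month")
```

Even this toy model shows why network egress, storage tiering and instance right-sizing inputs matter: small per-unit rate differences compound into large monthly deltas.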

VMware Log Intelligence

Log Intelligence is a new VMware cloud service that provides real-time data infrastructure insight along with application visibility across private, on-premises, public and hybrid clouds. As its name implies, Log Intelligence provides syslog and other log insight, analysis and intelligence, with real-time visibility into VMware as well as AWS among other resources, for faster troubleshooting, diagnostics, event correlation and other data infrastructure management tasks.

Log and telemetry input sources for VMware Log Intelligence include data infrastructure resources such as operating systems, servers, system statistics, security and applications, among other syslog event sources. For those familiar with VMware Log Insight, this capability is an extension of that experience, expanding it into a cloud based service.
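
At its simplest, log intelligence starts with parsing and correlating syslog entries. A small Python sketch (the log lines, regex and process names are illustrative, not the service's internals):

```python
import re
from collections import Counter

# Illustrative syslog pattern: timestamp, host, process[pid]: message
SYSLOG = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s(?P<host>\S+)\s"
    r"(?P<proc>[\w\-/]+)(\[\d+\])?:\s(?P<msg>.*)$"
)

lines = [
    "Mar 12 10:01:07 esx01 vpxa[1182]: connection reset by peer",
    "Mar 12 10:01:09 esx01 hostd[900]: task failed: vim.fault.Timeout",
    "Mar 12 10:01:11 esx02 hostd[917]: task completed",
]

errors = Counter()
for line in lines:
    m = SYSLOG.match(line)
    if m and ("failed" in m["msg"] or "reset" in m["msg"]):
        errors[m["host"]] += 1   # correlate suspicious events per host

print(errors.most_common())      # → [('esx01', 2)]
```

Scale that pattern up to millions of lines per minute across VMware and AWS sources and you have the problem these services exist to solve.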

VMware Wavefront SaaS analytics
Wavefront by VMware Source: VMware.com

VMware Wavefront enables monitoring of cloud native, high scale environments with custom metrics and analytics. As a reminder, Wavefront was acquired by VMware to enable deep metrics and analytics for developers, DevOps, data infrastructure operations as well as SaaS application developers among others. Wavefront integrates with VMware vRealize along with enabling monitoring of AWS data infrastructure resources and services. With the ability to ingest, process and analyze various data feeds, the Wavefront engine enables predictive understanding of mixed application, cloud native data and data infrastructure platforms, including big data environments.
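
Conceptually, the heart of this kind of metric analytics is noticing when a series departs from its recent behavior. A toy Python sketch of trailing-window anomaly detection (my own illustration with synthetic data, not Wavefront's actual engine):

```python
from statistics import mean, stdev

def anomalies(samples, window=10, threshold=3.0):
    """Flag points more than `threshold` std-devs from the trailing window mean."""
    flagged = []
    for i in range(window, len(samples)):
        trail = samples[i - window:i]
        mu, sigma = mean(trail), stdev(trail)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Steady request latency with one spike at index 15 (synthetic data).
latency_ms = [20, 21, 19, 20, 22, 20, 21, 19, 20, 21,
              20, 19, 21, 20, 22, 95, 21, 20]
print(anomalies(latency_ms))   # → [15]
```

Production engines add seasonality, forecasting and per-series tuning, but the ingest-window-compare loop is the same shape.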

Where to learn more

Learn more about VMware, vSphere, vRealize, VMware Cloud, AWS (and other clouds), along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

VMware continues cloud construction. For now, it appears that VMware, like Dell Technologies, is content to be a technology provider and partner to large as well as small public, private and hybrid cloud environments instead of building its own and competing. With this series of announcements, VMware continues cloud construction, enabling its partners and customers on their various software defined data center (SDDC) and related data infrastructure journeys. Overall, this is a good set of enhancements, updates, and new as well as evolving features for the partners and customers who leverage VMware based technologies.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cloud Conversations AWS Azure Service Maps via Microsoft

server storage I/O data infrastructure trends

Updated 1/21/2018

Microsoft has created an Amazon Web Services (AWS) to Azure service map. The AWS Azure Service Map is a list created by Microsoft that matches corresponding services of the two cloud providers.

Azure AWS service map via Microsoft.com
Image via Azure.Microsoft.com

Note that this is an evolving work in progress from Microsoft; use it as a tool to help position the different services from Azure and AWS.

Also note that not all features or services may be available in all regions; visit the Azure and AWS sites to see current availability.

As with any comparison, these are often dated the day they are posted, hence this is a work in progress. If you are looking for another Microsoft created "why Azure vs. AWS" piece, then check it out here. If you are looking for an AWS vs. Azure comparison, do a simple Google (or Bing) search and watch all the various items appear, some sponsored, some not so sponsored.

What's In the Service Map

The following AWS and Azure services are mapped:

  • Marketplace (e.g. where you select service offerings)
  • Compute (Virtual Machines instances, Containers, Virtual Private Servers, Serverless Microservices and Management)
  • Storage (Primary, Secondary, Archive, Premium SSD and HDD, Block, File, Object/Blobs, Tables, Queues, Import/Export, Bulk transfer, Backup, Data Protection, Disaster Recovery, Gateways)
  • Network & Content Delivery (Virtual networking, virtual private networks and virtual private cloud, domain name services (DNS), content delivery network (CDN), load balancing, direct connect, edge, alerts)
  • Database (Relational, SQL and NoSQL document and key value, caching, database migration)
  • Analytics and Big Data (data warehouse, data lake, data processing, real-time and batch, data orchestration, data platforms, analytics)
  • Intelligence and IoT (IoT hub and gateways, speech recognition, visualization, search, machine learning, AI)
  • Management and Monitoring (management, monitoring, advisor, DevOps)
  • Mobile Services (management, monitoring, administration)
  • Security, Identity and Access (security, directory services, compliance, authorization, authentication, encryption, firewall)
  • Developer Tools (workflow, messaging, email, API management, media transcoding, development tools, testing, DevOps)
  • Enterprise Integration (application integration, content management)

Download a PDF version of the service map from Microsoft here.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

On one hand this can and will likely be used as a comparison; however, use caution, as both Azure and AWS services are rapidly evolving, adding new features and extending others. Likewise the service regions and data center sites continue to evolve, thus use the above as a general guide or tool to help map which service offerings are similar between AWS and Azure.

By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.

Ok, nuff said, for now.

Gs

Chelsio Storage over IP and other Networks Enable Data Infrastructures

Chelsio and Storage over IP (SoIP) continue to enable data infrastructures from legacy to software defined virtual, container, cloud as well as converged. This past week I had a chance to visit with Chelsio to discuss data infrastructures, server storage I/O networking along with other related topics. More on Chelsio later in this post; however, for now let's take a quick step back and refresh what Storage over IP (SoIP) is, along with storage over Ethernet (among other networks).

Data Infrastructures Protect Preserve Secure and Serve Information
Various IT and Cloud Infrastructure Layers including Data Infrastructures

Server Storage over IP Revisited

There are many variations of SoIP, from network attached storage (NAS) file based access including NFS and SAMBA/SMB (aka Windows file sharing) among others, to block access such as SCSI over IP (e.g. iSCSI), along with object access via HTTP/HTTPS, not to mention the buzzword bingo list of RoCE, iSER, iWARP, RDMA, DPDK, FTP, FCoE, iFCP, and SMB3 Direct to name a few.

Who is Chelsio

For those who are not aware or need a refresher, Chelsio is involved with enabling server storage I/O by creating ASICs (Application Specific Integrated Circuits) that perform various functions, offloading them from the host server processor. What this means for some is a throwback to the TCP Offload Engine (TOE) era of the early 2000s, where regular network processing as well as iSCSI and other storage over Ethernet and IP traffic could be accelerated.

Chelsio data infrastructure focus

Chelsio ecosystem across different data infrastructure focus areas and application workloads

As seen in the image above, certainly there is a server and storage I/O network play with Chelsio, along with traffic management, packet inspection, security (encryption, SSL and other offload), traditional, commercial, web, high performance compute (HPC) along with high profit or productivity compute (the other HPC). Chelsio also enables data infrastructures that are part of physical bare metal (BM), software defined virtual, container, cloud, serverless among others.

Chelsio server storage I/O focus

The above image shows how Chelsio enables initiators on server and storage appliances as well as targets via various storage over IP (or Ethernet) protocols.

Chelsio enabling various data center resources

Chelsio also plays in several different sectors from *NIX to Windows, Cloud to Containers, Various processor architectures and hypervisors.

Chelsio ecosystem

Besides diverse server storage I/O enabling capabilities across various data infrastructure environments, what caught my eye with Chelsio is how far they, and storage over IP, have progressed over the past decade (or more). Granted there are faster underlying networks today, however the offload and specialized chip sets (e.g. ASICs) have also progressed, as seen in the above and following series of images via Chelsio.

The above shows TCP and UDP acceleration; the following shows Microsoft SMB 3.1.1 performance, something important for Storage Spaces Direct (S2D) and Windows based Converged Infrastructure (CI) along with Hyper Converged Infrastructure (HCI) deployments.

Chelsio software environments

Something else that caught my eye was iSCSI performance, which in the following shows 4 initiators accessing a single target doing about 4 million IOPs (reads and writes) across various sizes and configurations. Granted that is with a 100Gb network interface, however it also shows that potential bottlenecks are removed, enabling the faster network to be used more effectively.

Chelsio server storage I/O performance

Moving on from TCP, UDP and iSCSI: NVMe, and in particular NVMe over Fabric (NVMeoF), has become a popular industry topic, so check out the following. One of my comments to Chelsio was to add host (server) CPU usage to the chart to help tell the story and value proposition of NVMe, namely doing more I/O activity while consuming fewer server-side resources. Let's see what they put out in the future.

Ok, so Chelsio does storage over IP, storage over Ethernet and other interfaces, accelerating that performance as well as regular TCP and UDP activity. Another benefit of what Chelsio and others are doing with their ASICs (or FPGAs for some) is offloading processing for security among other functions. Given the increased focus on server storage I/O and data infrastructure security, from encryption to SSL and related usage that requires more resources, these new ASICs such as Chelsio's help offload various specialized processing from the server.

The customer benefit is that more productive application work can be done by their servers (or storage appliances). For example, on a database server that means more productive database transactions per second per licensed copy of the software. Put another way, want to get more value out of your Oracle, Microsoft or other vendor software licenses? Simple: get more work done per licensed server by offloading and eliminating waits or other bottlenecks.

Using offloads and removing server bottlenecks might seem like common sense; however, I'm still amazed at the number of organizations that are more focused on getting extra value out of their hardware than on getting value out of their software licenses (which might be more expensive).
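
The license economics are easy to put numbers on. A back-of-the-envelope Python sketch (all figures invented for illustration; the 1.5x offload gain is a hypothetical, not a Chelsio benchmark):

```python
def cost_per_transaction(license_cost_per_core, cores, tps):
    """Annual license dollars divided by the transactions the server can push."""
    annual_tx = tps * 60 * 60 * 24 * 365
    return license_cost_per_core * cores / annual_tx

# Same license bill; offload frees CPU so the same box does more work.
baseline  = cost_per_transaction(10_000, 16, tps=5_000)
offloaded = cost_per_transaction(10_000, 16, tps=7_500)   # hypothetical 1.5x
print(f"baseline ${baseline:.9f}/tx vs offloaded ${offloaded:.9f}/tx")
```

The license bill is fixed, so every extra transaction per second the offload buys lowers the cost per unit of work by the same factor.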

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.


What This All Means

Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. With more data being created at a faster rate, along with the size of data becoming larger, increased application functionality to transform data into information means more demands on data infrastructures and their underlying resources.

This means more server I/O to storage system and other servers, along with increased use of SoIP as well as storage over Ethernet and other interfaces including NVMe. Chelsio (and others) are addressing the various application and workload demands by enabling more robust, productive, effective and efficient data infrastructures.

Check out Chelsio and how they are enabling storage over IP (SoIP) to enable data infrastructures from legacy to software defined virtual, container, cloud as well as converged. Oh, and thanks Chelsio for permission to use the above images.

Ok, nuff said, for now.
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

New family of Intel Xeon Scalable Processors enable software defined data infrastructures (SDDI) and SDDC

Today Intel announced a new family of Xeon Scalable Processors (code name Purley) that Intel claims are, for some workloads, on average 1.65x faster than their predecessors. Note that your actual improvement will vary based on workload, configuration, benchmark testing, type of processor, memory, and many other server storage I/O performance considerations.

Intel Scalable Xeon Processors
Image via Intel.com

In general the new Intel Xeon Scalable Processors enable legacy and software defined data infrastructures (SDDI), along with software defined data centers (SDDC), cloud and other environments to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity).

Data Infrastructures and workloads

Some target application and environment workloads Intel is positioning these new processors for includes among others:

  • Machine Learning (ML), Artificial Intelligence (AI), advanced analytics, deep learning and big data
  • Networking including software defined network (SDN) and network function virtualization (NFV)
  • Cloud and Virtualization including Azure Stack, Docker and Kubernetes containers, Hyper-V, KVM, OpenStack VMware vSphere, KVM among others
  • High Performance Compute (HPC) and High Productivity Compute (e.g. the other HPC)
  • Storage including legacy and emerging software defined storage software, deployed as appliances, systems or in serverless deployment modes

Features of the new Intel Xeon Scalable Processors include:

  • New core micro architecture with interconnects and on die memory controllers
  • Sockets (processors) scalable up to 28 cores
  • Improved networking performance using Quick Assist and Data Plane Development Kit (DPDK)
  • Leverages Intel Quick Assist Technology for CPU offload of compute intensive functions including I/O networking, security, AI, ML, big data, analytics and storage functions. Functions that benefit from Quick Assist include cryptography, encryption, authentication, cipher operations, digital signatures, key exchange, lossless data compression and data footprint reduction along with data at rest encryption (DARE).
  • Optane Non-Volatile Dual Inline Memory Module (NVDIMM) for storage class memory (SCM) also referred to by some as Persistent Memory (PM), not to be confused with Physical Machine (PM).
  • Supports Advanced Vector Extensions 512 (AVX-512) for HPC and other workloads
  • Optional Omni-Path Fabrics in addition to 1/10Gb Ethernet among other I/O options
  • Six memory channels supporting up to 6TB of RDIMM with multi socket systems
  • From two to eight sockets per node (system)
  • Systems support PCIe 3.x (some supporting x4 based M.2 interconnects)

Note that exact speeds, feeds, slots and watts will vary by specific server model and vendor options. Also note that some server system solutions have two or more nodes (e.g. two or more real servers) in a single package not to be confused with two or more sockets per node (system or motherboard). Refer to the where to learn more section below for links to Intel benchmarks and other resources.
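
To see why offloading functions such as compression matters, it helps to measure how much CPU time they burn in software. A quick stdlib sketch (software zlib here; the point of Quick Assist-style hardware is to move exactly this class of work off the cores):

```python
import os
import time
import zlib

# 2 MB of mixed data: half incompressible (random), half highly compressible.
payload = os.urandom(1 << 20) + bytes(1 << 20)

t0 = time.perf_counter()
compressed = zlib.compress(payload, level=6)
elapsed_ms = (time.perf_counter() - t0) * 1000

print(f"{len(payload)} -> {len(compressed)} bytes in {elapsed_ms:.1f} ms of CPU")
# Multiply that per-buffer cost by a storage system's sustained throughput
# and the appeal of dedicated offload hardware becomes obvious.
```

The same argument applies to the cryptography, hashing and dedupe fingerprinting functions listed above: each is cheap per call but expensive at line rate.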

Software Defined Data Infrastructures, SDDC, SDX and SDDI

What About Speeds and Feeds

Watch for and check out the various Intel partners who have or will be announcing their new server compute platforms based on Intel Xeon Scalable Processors. Each of the different vendors will have various speeds and feeds options that build on the fundamental Intel Xeon Scalable Processor capabilities.

For example Dell EMC announced their 14G server platforms at the May 2017 Dell EMC World event with details to follow (e.g. after the Intel announcements).

Some things to keep in mind: the amount of DDR4 DRAM (or Optane NVDIMM) will vary by vendor server platform configuration, motherboard, number of sockets and DIMM slots. Also keep in mind the differences between registered DIMMs (e.g. buffered RDIMM) that give good capacity and great performance, and load reduced DIMMs (LRDIMM) that have great capacity and OK performance.

Various NVMe options

What about NVMe

It's there, as these systems like previous Intel models support NVMe devices via PCIe 3.x slots, and some vendor solutions also support M.2 x4 physical interconnects as well.

Image via Software Defined Data Infrastructure Essentials (CRC)

Note that Broadcom, formerly known as Avago and LSI, recently announced PCIe based RAID and adapter cards that support NVMe attached devices in addition to SAS and SATA.

What About Intel and Storage

In case you have not connected the dots yet, the Intel Xeon Scalable Processor based server (aka compute) systems are also a fundamental platform for storage systems, services, solutions, appliances along with tin-wrapped software.

What this means is that the Intel Xeon Scalable Processors based systems can be used for deploying legacy as well as new and emerging software-defined storage software solutions. This also means that the Intel platforms can be used to support SDDC, SDDI, SDX, SDI as well as other forms of legacy and software-defined data infrastructures along with cloud, virtual, container, server less among other modes of deployment.

Image Via Intel.com

Moving beyond server and compute platforms, there is another tie to storage as part of this and other recent Intel announcements. Just a few weeks ago Intel announced 64 layer triple level cell (TLC) 3D NAND solutions positioned for the client market (laptops, workstations, tablets, thin clients). With that announcement Intel increased the traditional areal density (e.g. bits per square inch or cm) as well as boosting the number of layers (stacking more bits).

The net result is not only more bits per square inch, but also more per cubic inch or cm. This is all part of a continued evolution of NAND flash, including from 2D to 3D, MLC to TLC, and 32 to 64 layers. In other words, NAND flash based Solid State Devices (SSDs) are very much still a relevant and continually enhanced technology, even with the emerging 3D XPoint and Optane (also available via Amazon in M.2) in the wings.
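
The capacity math behind those layer and cell changes is simple. A two-line sketch of the relative density gain (geometry idealized; real dies add controller and ECC overhead):

```python
def relative_density(layers, bits_per_cell, base_layers=32, base_bits=2):
    """Capacity gain per die footprint versus a 32-layer MLC baseline."""
    return (layers / base_layers) * (bits_per_cell / base_bits)

# Going from 32-layer MLC (2 bits/cell) to 64-layer TLC (3 bits/cell):
print(f"{relative_density(64, 3):.1f}x the bits in the same footprint")
# → 3.0x the bits in the same footprint
```

Doubling the layers doubles capacity vertically, and moving from 2 to 3 bits per cell adds another 50 percent, which is how density keeps climbing without shrinking the cell.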

server memory evolution
Via Intel and Micron (3D XPoint launch)

Keep in mind that NAND flash was first announced roughly 30 years ago (late 1980s), and it is still evolving. 3D XPoint, announced two years ago, along with other emerging storage class memories (SCM), non-volatile memory (NVM) and persistent memory (PM) devices, is part of the future, as is 3D NAND (among others). Speaking of 3D XPoint and Optane, Intel has had announcements about those in the past as well.

Where To Learn More

Learn more about Intel Xeon Scalable Processors along with related technology, trends, tools, techniques and tips with the following links.

What This All Means

Some say the PC is dead, and IMHO that depends on what you mean or how you define a PC. For example, if you use PC generically to include servers besides workstations or other devices, then they are alive. If however your view is that PCs are only workstations and client devices, then they are on the decline.

However if your view is that a PC is defined by the underlying processor, such as an Intel general purpose 64 bit x86 derivative (or descendent), then they are very much alive. Just as older generations of PCs leveraging general purpose Intel based x86 (and predecessor) processors were deployed for many uses, so too are today's line of Xeon (among other) processors.

Even with the increase of ARM, GPU and other specialized processors, as well as ASICs and FPGAs for offloads, the role of general purpose processors continues to increase, as does the technology evolution around them. Even so called serverless architectures still need underlying compute server platforms for running software, which also includes software defined storage, software defined networks, SDDC, SDDI, SDX and IoT among others.

Overall this is a good set of announcements by Intel and what we can also expect to be a flood of enhancements from their partners who will use the new family of Intel Xeon Scalable Processors in their products to enable software defined data infrastructures (SDDI) and SDDC.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

GDPR goes into effect May 25 2018 Are You Ready?


The new European General Data Protection Regulation (GDPR) goes into effect in a year, on May 25, 2018. Are you ready?

Why Become GDPR Aware

If your initial response is that you are not in Europe and do not need to be concerned about GDPR, you might want to step back and review that thought. While it is possible that some organizations may not be affected by GDPR in Europe directly, there might be indirect considerations. For example, GDPR, while focused on Europe, has ties to other initiatives in place or being planned elsewhere in the world. Likewise, unlike earlier regulatory compliance that tended to focus on specific industries such as healthcare (HIPAA and HITECH) or financial (SARBOX and Dodd-Frank among others), these new regulations can be more far-reaching.

GDPR Looking Beyond Compliance

Taking a step back, GDPR, as its name implies, is about general data protection, including how information is protected, preserved, secured and served. This also includes taking safeguards to logically protect data with passwords and encryption among other techniques. Another dimension of GDPR is reporting and the ability to track who has accessed what information (and when), as well as simply knowing what data you have.
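As a toy illustration of that tracking dimension (not a GDPR-mandated mechanism or schema), a data access path can append who, what and when to an audit trail. Everything here, from the field names to the customer_records label, is invented for the example:

```python
import time
from functools import wraps

AUDIT_LOG = []  # in practice: an append-only, tamper-evident store

def audited(resource):
    """Record who accessed which resource, and when, before serving it."""
    def wrap(fn):
        @wraps(fn)
        def inner(user, *args, **kwargs):
            AUDIT_LOG.append({
                "who": user,
                "what": resource,
                "when": time.time(),
                "action": fn.__name__,
            })
            return fn(user, *args, **kwargs)
        return inner
    return wrap

@audited("customer_records")
def read_customer(user, customer_id):
    # stand-in for the real data access
    return {"id": customer_id}

read_customer("alice", 42)
```

A real deployment would also need to protect the audit trail itself, since knowing who accessed what is only useful if that record can be trusted.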

What this means is that GDPR impacts users from consumers of social media such as Facebook, Instagram, Twitter and Linkedin among others, to cloud storage and related services, as well as traditional applications. In other words, GDPR is not just for finance or healthcare; it is more far-reaching, requiring you to know what data exists and to take adequate steps to protect it.

There is a lot more to discuss about GDPR in Europe as well as what else is being done in other parts of the world. For now, being aware of initiatives such as GDPR and their broader impact beyond traditional compliance is important. With these new initiatives, the focus expands from the compliance office or officers to the data protection office and data protection officer, whose scope is to protect, preserve, secure and serve data along with associated information.

GDPR and Microsoft Environments

As part of generating awareness and helping with planning, I'm going to be presenting a free webinar produced by Redmond Magazine and sponsored by Quest (who will also be a co-presenter) on June 22, 2017 (7AM PT). The title of the webinar is GDPR Compliance Planning for Microsoft Environments.

This webinar looks at the General Data Protection Regulation (GDPR) and its impact on Microsoft environments. Specifically, we look at how GDPR along with other future compliance directives impact Microsoft cloud, on-premises, and hybrid environments, as well as what you can do to be ready before the May 25, 2018 deadline. Join us for this discussion of what you need to know to plan and carry out a strategy to help address GDPR compliance regulations for Microsoft environments.

What you will learn during this discussion:

  • Why GDPR and other regulations impact your environment
  • How to assess and find compliance risks
  • How to discover who has access to sensitive resources
  • Importance of real-time auditing to monitor and alert on user access activity

This webinar applies to business professionals responsible for strategy, planning and policy decision-making for Microsoft environments along with associated applications. This includes security, compliance, data protection, system admins, architects and other IT professionals.

What This All Means

Now is the time to start planning and preparing for GDPR if you have not done so already, as well as becoming more generally aware of it and other initiatives. One of the key takeaways is that while the word compliance is involved, there is much more to GDPR than just the compliance we have seen in the past. With GDPR and other initiatives, data protection becomes the focus, including privacy, protect, preserve, secure and serve, as well as management, insight and awareness along with associated reporting. Join me and Quest on June 22, 2017 at 7AM PT for the webinar GDPR Compliance Planning for Microsoft Environments to learn more.

Ok, nuff said, for now.

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).



Preparing For World Backup Day 2017 Are You Prepared


In case you have forgotten, or were not aware, this coming Friday, March 31, is World Backup Day 2017 (and recovery day). The annual day is a reminder to make sure you are protecting your applications, data, information, configuration settings as well as data infrastructures. While the emphasis is on backup, that also means recovery, as well as testing to make sure everything is working properly as part of on-prem and cloud data protection.

What the Vendors Have To Say

Today I received the following from Kylle over at TOUCHDOWNPR on behalf of their clients, providing their perspectives on what World Backup Day means and how to be prepared. Keep in mind these are not Server StorageIO clients (granted, some have been in the past, or I know them; that is a disclosure btw), and this is in no way an endorsement of what they are saying or advocating. Instead, this is simply passing along to you what was given to me.

Not included in this list? No worries, add your perspectives (politely) to the comments, or, drop me a note, and perhaps I will do a follow-up or addition to this.

Kylle O’Sullivan
TOUCHDOWNPR
Email: Kosullivan@touchdownpr.com
Mobile: 508-826-4482
Skype: Kylle.OSullivan

“Data loss and disruption happens far too often in the enterprise. Research by Ponemon in 2016 estimates the average cost of an unplanned outage has spiralled to nearly $9,000 a minute, causing crippling downtime as well as financial and reputational damage. Legacy backups simply aren’t equipped to provide seamless operations, with zero Recovery Point Objectives (RPO) should a disaster strike. In order to guarantee the availability of applications, synchronous replication with real-time analytics needs to be simple to setup, monitor and manage for application owners and economical to the organization. That way, making zero data loss attainable suddenly becomes a reality.” – Chuck Dubuque, VP Product Marketing, Tintri

“With today’s “always-on” business environment, data loss can destroy a company’s brand and customer trust. A multiple software-based strategy with software-defined and hyperconverged storage infrastructure is the most effective route for a flexible backup plan.  With this tactic, snapshots, replication and stretched clusters can help protect data, whether in a local data center cluster, across data centers or across the cloud. IT teams rely on these software-based policies as the backbone of their disaster recovery implementations as the human element is removed. This is possible as the software-based strategy dictates that all virtual machines are accurately, automatically and consistently replicated to the DR sites. Through this automatic and transparent approach, no administrator action is required, saving employees time, money and providing peace of mind that business can carry on despite any outage.” – Patrick Brennan, Senior Product Marketing Manager, Atlantis Computing

“It’s only a matter of time before your datacenter experiences a significant outage, if it hasn’t already, due to a wide range of causes, from something as simple as human error or power failure to criminal activity like ransomware and cyberattacks, or even more catastrophic events like hurricanes. Shifting thinking to ‘when’ as opposed to ‘if’ something like this happens is crucial; crucial to building a more flexible and resilient IT infrastructure that can withstand any kind of disruption resulting in negative impact on business performance. World Backup Day reminds us of the importance of both having a backup plan in place and as well as conducting regular reviews of current and new technology to do everything possible to keep business running without interruption. Organizations today are highly aware that they are heavily dependent on data and critical applications, and that losing even just an hour of data can greatly harm revenues and brand reputation, sometimes beyond repair. Savvy businesses are taking an all-inclusive approach to this problem that incorporates cloud-based technologies into their disaster recovery plans. And with consistent testing and automation, they are ensuring that those plans are extremely simple to execute against in even the most challenging of situations, a key element of successfully avoiding damaging downtime.” Rob Strechay, VP Product, Zerto

“Data is one of the most valuable business assets and when it comes to data protection chief among its IT challenges is the ever-growing rate of data and the associated vulnerability. Backup needs to be reliable, fast and cost efficient. Organizations are on the defensive after a disaster and being able to recover critical data within minutes is crucial. Breakthroughs in disk technologies and pricing have led to very dense arrays that are power, cost and performance efficient. Backup has been revolutionized and organizations need to ensure they are safeguarding their most valuable commodity – not just now but for the long term. Secure archive platforms are complementary and create a complete recovery strategy.”  – Geoff Barrall, COO, Nexsan

Consider the DR Options that Object Storage Adds
“Data backup and disaster recovery used to be treated as separate processes, which added complexity. But with object storage as a backup target you now have multiple options to bring backup and DR together in a single flow. You can configure a hybrid cloud and tier a portion of your data to the public cloud, or you can locate object storage nodes at different locations and use replication to provide geographic separation. So, this World Backup Day, consider how object storage has increased your options for meeting this critical need.” – Jon Toor, Cloudian CMO

Whats In Your Data Protection Toolbox

What tools and technologies do you have in your data protection toolbox? Do you only have a hammer, and thus the answer to every situation is that it looks like a nail? Or do you have multiple tools and technologies, combined with your various tradecraft experiences, to apply different techniques?

storageio data protection toolbox

Where To Learn More

Follow these links to additional related material about backup, restore, availability, data protection, BC, BR, DR along with associated topics, trends, tools, technologies as well as techniques.

Time to restore from backup: Do you know where your data is?
February 2017 Server StorageIO Update Newsletter
Data Infrastructure Server Storage I/O Tradecraft Trends
Data Infrastructure Server Storage I/O related Tradecraft Overview
Data Infrastructure Primer and Overview (Its Whats Inside The Data Center)
What’s a data infrastructure?
Ensure your data infrastructure remains available and resilient
Part III Until the focus expands to data protection – Taking action
Welcome to the Data Protection Diaries
Backup, Big data, Big Data Protection, CMG & More with Tom Becchetti Podcast
Six plus data center software defined management dashboards
Cloud Storage Concerns, Considerations and Trends
Software Defined, Cloud, Bulk and Object Storage Fundamentals (www.objectstoragecenter.com)

Data Infrastructure Overview, Its Whats Inside of Data Centers
All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
Various Data Infrastructure related events, webinars and other activities

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Backup of data is important; so too is recovery, which also means testing. Testing means more than just whether you can read the tape, disk, SSD, USB, cloud or other medium (or location). Go a step further and verify not only that you can read the data from the medium, but also that your applications or software are able to use it. Have you protected your applications (e.g. not just the data), security keys, encryption, access, dedupe and other certificates along with metadata as well as other settings? Do you have a backup or protection copy of your protection, including recovery tools? What granularity of protection and recovery do you have in place, and when did you last test it? In other words, what this all means is be prepared, find and fix issues, and, in the course of testing, don't cause a disaster.
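As a minimal illustration of the "read it and use it" point, here is a hypothetical Python sketch that checks a protection copy two ways: a media-level checksum comparison, then a crude application-level read. A real environment would verify against the actual applications and cover the keys, settings and metadata discussed above; the file names and content here are invented.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Checksum a file in chunks so large backups do not exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source, backup):
    """Media-level check: the backup is readable and bit-identical."""
    return sha256_of(source) == sha256_of(backup)

# --- demo with throwaway files ---
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "data.txt")
with open(src, "w") as f:
    f.write("important records\n")
bak = os.path.join(tmp, "data.bak")
shutil.copyfile(src, bak)

ok = verify_copy(src, bak)
# application-level check: can the software actually use the restored data?
with open(bak) as f:
    usable = f.readline().startswith("important")
shutil.rmtree(tmp)
```

The two checks catch different failures: a checksum proves the bits survived, while only an application-level read proves the data is still usable.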

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Six plus data center software defined management dashboard tools

Software defined data infrastructure management insight tools


Updated 1/17/2018

Managing data infrastructures involves using software defined management dashboard tools. Recently I found in my inbox a link to a piece, 6 Dashboards for Managing Every Modern Data Center, that caught my attention. I was hoping to see which six data center dashboard solutions or tools were featured; instead I found a list of dashboard considerations for modern data centers and data infrastructures.

Turns out the piece was nothing more than a list of six items featured as part of the vendor's (Sunbird) piece about what to look for in a dashboard (e.g. their product). Sure, there were some of the usual key performance indicators (KPIs) associated with or related to IT Service Management (ITSM), Data Center Infrastructure (Insight/Information) Management (DCIM), Configuration and Change Management Databases (CMDB), along with availability, capacity and Performance Management Databases (PMDB) among others:

  • Space
  • Inventory
  • Connectivity
  • Change
  • Environment
  • Power

Dashboard Discussions

Keep in mind, however, that there are many different types of dashboards (and consoles); some are active with analytics including correlation, while others are passive, simply displaying information. The focus area also varies from physical data center facilities, to applications, to data infrastructures or components such as servers, storage, I/O networks, cloud, virtual and containers among other parts of modern data centers.

Data Infrastructures and SDDI, SDDC, SDI
Data Infrastructures (hardware, software, services, servers, storage, I/O and networks)

This is where some context comes into play, as there are different types of dashboards for various audiences, technologies and focus areas (e.g. domains) across data infrastructures (and other entities). For example, do a Google search of "dashboard" and see what appears, or "IT dashboard", "data center dashboard" vs. "datacenter dashboard" among others.

Additional KPIs include:

  • Performance, availability, Capacity and Economic (PACE) attributes
  • Service Level Objectives (SLO), Service Level Agreements (SLAs)
  • Recovery Time Objectives (RTO), Recovery Point Objectives (RPO)
  • IT Service Management (ITSM) and Data Center Infrastructure Management (DCIM)
  • Configuration and Change Management (e.g. things part of CMDB)
  • Performance, availability and capacity (e.g. things part of PMDB)
  • Various focus and layers, cross domain functionality views
  • Costs management including subscriptions, licenses and others
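To make a couple of these KPIs concrete, here is a small Python sketch (with figures invented for illustration) that converts uptime and downtime into an availability fraction and the rough "number of nines" often quoted in SLOs and SLAs.

```python
import math

def availability(uptime_hours, downtime_hours):
    """Availability as a fraction of total scheduled time."""
    total = uptime_hours + downtime_hours
    return uptime_hours / total

def nines(avail):
    """Nearest whole 'number of nines' for an availability fraction."""
    return round(-math.log10(1.0 - avail))

# Roughly 52.6 minutes of downtime in a year (8760 hours) is about 99.99%.
downtime = 52.6 / 60.0
a = availability(8760.0 - downtime, downtime)
```

The same arithmetic works in reverse: pick a target number of nines in an SLA and it tells you how many minutes of downtime per year you are allowed.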

IT Data Center and Data Infrastructure Dashboard Options

For those of you who have made it this far, while not a comprehensive list, the following are some examples of vendors, services or solutions that either are, or have an association with, data center as well as data infrastructure management. Some dashboards or tools are homogeneous in that they only work within a given area of focus such as a particular cloud, service provider, vendor or solution set. Others are heterogeneous or federated, working across different services, solutions, vendors and domain focus areas. Think of these as software defined management (SDM), or software defined data infrastructure (SDDI) management, or software defined data center (SDDC) management, among other variations for the modern information factory.

There is a mix of tools that run on site (e.g. on premises) or via cloud services (e.g. manage your on-site environment from the cloud). Likewise, some are for a fee, others subscription-based, and some are open source. In addition, some of the tools are turnkey while others are do it yourself (DiY) or allow you to customize. Also keep in mind that depending on what your tradecraft (skills, experience, expertise) interest area is, these may or may not be applicable to you, while relevant to others. For example, some such as Spiceworks tend to be more helpdesk focused, while others focus on other data center or data infrastructure areas.

There are dashboards for or from AWS, Canonical (Ubuntu), Dell including EMC, Google, HPE, IBM, Microsoft (System Center and Azure), NetApp, OpenStack, Oracle, Rackspace, Red Hat, RightScale, ServiceNow, SoftLayer, SUSE and VMware among others.

Blue Medora (various data infrastructure monitoring)
Cloudkitty (open source cloud rating and chargeback)
Collectd (data infrastructure collection and monitoring)
cPanel and whm (web and hosting dashboards)
data infrastructure sddi cpanel

Dashbuilder (customize your dashboard)
Datadog (super easy to get access, download, install, configure and use)
Domo (various data infrastructure monitoring tools)
Extrahop (still waiting to be able to download and try their bits vs. watching a demo)
Firescope (data infrastructure insight and awareness)
Freezer (open source dashboard tools)
Komprise (interesting solution, would like try, however lots of gated material)
Nagios (data infrastructure monitoring)
Openit (data infrastructure tracking, report, monitoring)
Opvizor (data infrastructure monitoring and reporting)

storageio datadog dashboard

Panorama9 (various data infrastructure monitoring and reporting)
Quest (various tools)
Redhat Cloudforms (openstack and cloud management)
Rrdtools (data collection, logging and display)
Sisense (insight and awareness tools)
Solarwinds Server Application Monitor (SAM) among other tools
Teamquest (various monitoring, management, capacity planning tools)
Turbonomic (software defined data infrastructure insight tools)
Virtual Instruments (various monitoring and insight awareness along with analytics)

In addition to the above, there are tools such as Splunk among others that also provide insight and awareness to help avoid flying blind while managing your data center or data infrastructure.

Where to learn more

Learn more via the following links.

  • Data Infrastructure Primer and Overview (Its Whats Inside The Data Center)
  • E2E Awareness and insight for IT environments
  • Server and Storage I/O Benchmarking and Performance Resources
  • Data Center Infrastructure Management (DCIM) and IRM
  • The Value of Infrastructure Insight – Enabling Informed Decision Making
  • More storage and IO metrics that matter
  • Whats a data infrastructure?
  • Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What this all means

    Without insight and awareness you are flying blind; how can you make informed decisions about your information factory, data infrastructures and data centers along with applications? There are different focus areas for various audiences up and down the stack layers in data infrastructures and data centers. Key is having insight and awareness, including knowing what some of the different tool options are.

    Ok, nuff said, for now.

    Gs



    Some popular 2016 storageioblog posts



    Big Files and Lots of Little File Processing and Benchmarking with Vdbench – Need to test, validate, compare, contrast or simply apply a workload to file systems, NAS or other file-based access? Want the flexibility and simplicity to software define your benchmark workload to meet various needs? For example, millions of small files, or thousands of large 5GB, 10GB, 15GB (or larger) files, with various read and write sizes and access patterns, spanning a single directory or many with various depths? Do you want the flexibility to span different platforms including Windows, *NIX, bare metal, container, virtual or cloud, using simple scripts that produce lots of insightful results without a bulky tool? Then you will want to check this post out.
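Vdbench drives this kind of workload from a parameter file; as a rough, hypothetical stand-in (not vdbench itself), here is a Python sketch that generates a tree of many small files and reports a files-per-second rate. The directory count, file count and file size are illustrative knobs, the same dimensions a vdbench file system workload lets you define.

```python
import os
import tempfile
import time

def small_file_workload(root, dirs=4, files_per_dir=100, size=4096):
    """Write dirs * files_per_dir small files and return files per second."""
    payload = b"x" * size
    start = time.perf_counter()
    for d in range(dirs):
        sub = os.path.join(root, "dir%02d" % d)
        os.makedirs(sub, exist_ok=True)
        for f in range(files_per_dir):
            with open(os.path.join(sub, "f%05d" % f), "wb") as fh:
                fh.write(payload)
    return (dirs * files_per_dir) / (time.perf_counter() - start)

# demo run against a throwaway directory tree
root = tempfile.mkdtemp()
rate = small_file_workload(root, dirs=2, files_per_dir=50)
created = sum(len(files) for _, _, files in os.walk(root))
```

Scaling the knobs up (deep trees, millions of files, mixed reads) quickly exposes the metadata-heavy behavior that separates small-file from big-file results.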

    Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350 – Ever have a VMware host server go into a boot loop with a purple screen of death (PSOD) displaying a message about ACPI or similar? After spending time searching and applying many filters to sift through the noise of false positive matches, I finally found the simple fix (e.g. a BIOS setting) to break the VMware ESXi vSphere boot loop, at least on a Lenovo server.

    Cloud and Object Storage

    Cloud conversations: AWS EBS, Glacier and S3 overview (Part I) – This is one of the perennial favorites; while new features have been added and others extended, the post series still provides a good overview, primer or refresher of various Amazon Web Services (AWS) offerings, including how they work. Interested in learning more about Microsoft and Azure? Then check out this, this, this and this.

    Cloud Conversations: AWS EFS Elastic File System (Cloud NAS) – This is a companion to the above AWS and other cloud post series that looks at AWS Elastic File System. Note that other cloud service providers have also added NAS file access support; some are intra-cloud (e.g. inside AWS), others are inter-cloud (e.g. inside and outside the cloud) such as Azure (which can work with external Windows Servers using SMB3). Even OpenStack has added NAS file access with Manila folders, and Ceph with CephFS among others. So when some people tell you that NAS and file access are dead, particularly for cloud, remind them of the increasing number of services and software stacks adding new services to stay compatible with existing environments and applications.

    Server Storage I/O performance

    Collecting Transactions Per Minute from SQL Server and HammerDB – If you have used the free tool HammerDB (formerly Hammerora) for driving database workloads, simulations or benchmarks, you should recall that the resulting statistics are rather lacking. Sure, there is a nice GUI chart that shows currently executing transactions per second (TPS) along with some very simple counters in the log. However, compared to some other tools such as sysbench, Quest Benchmark Factory and YCSB among others, the HammerDB metrics are rather lacking. In this post I show how you can collect some more metrics from SQL Server if you have to use HammerDB. View more server storage I/O performance benchmark and monitoring tools resources here.
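The post linked above covers pulling extra metrics out of SQL Server itself. Without assuming any particular query or counter schema, the arithmetic for turning two samples of a cumulative transaction counter into a transactions-per-minute rate looks like this sketch (the sample numbers are invented):

```python
def tpm(count1, count2, seconds):
    """Transactions per minute from two samples of a cumulative counter."""
    return (count2 - count1) * 60.0 / seconds

# e.g. the counter climbed from 1000000 to 1090000 over a 30 second window
rate = tpm(1000000, 1090000, 30)  # -> 180000.0 TPM
```

The key point is that cumulative counters only become a rate once you sample them twice and divide the delta by the interval; a single reading tells you nothing about current throughput.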

    Windows Server 2016

    Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 – Microsoft released Windows Server 2016 into general availability, and this post looks at some of the new features and functionality, including Storage Spaces Direct (S2D) and Storage Replica (SR), as well as other enhancements. With these new and enhanced features, Windows Servers increase their interoperability with Azure, as well as supporting aggregated hyper-converged infrastructure (HCI), disaggregated converged infrastructure (CI) and traditional workloads along with Hyper-V (and containers). Another enhancement is Windows Server 2016's use of ReFS (Resilient File System) as its default file system, which you can read more about here. RIP Windows SIS (Single Instance Storage), or at least in Server 2016: Microsoft removed single instance storage, replacing it with new capabilities that you can read more about in this post.

    Garbage data in garbage data out

    Garbage data in, garbage information out, big data or big garbage? There is a classic IT expression: garbage data in results in garbage information out, in that your algorithms and data structures (which together equal programs, e.g. Niklaus Wirth) are only as good as the data they work on. What this means is that where there is a large amount of big data, there can also be a big garbage in, garbage out problem unless it is addressed.

    Hard product vs. soft product – Hard product refers to something such as a hardware, software or service resource that is obtained and then joined with other resources in a particular way to create a soft product. Not to be confused with software, the soft product is the result, or how resources get defined, that gives some ability or benefit. Think of a soft product as how airlines can use the same airplane, serve the same Coca-Cola and have the same seats, yet their soft product is the service experience of how those are delivered, as well as how you find, buy or use them. Another way of thinking about it: hard products are the ingredients for a recipe, and the recipe defines how those ingredients result in some food dish.

    how many IOPs can an HDD or SSD do

    Part II: How many IOPS can a HDD, HHDD or SSD do with VMware? – This is part of a multi-post series looking at how many IOPS (or how much bandwidth) various HDDs and SSDs can do handling different workloads. Of course, your results will vary with configuration settings and tools among other considerations. However, some of the older rules of thumb (RUT) about RPM and other considerations for HDDs have changed and continue to do so. As an example of how HDDs continue to evolve, check out this popular post from the 2016 list: Which Enterprise HDDs to use for a Content Server Platform.
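As a reminder of where those HDD rules of thumb come from, here is a hedged sketch of the classic estimate: random IOPS are roughly the reciprocal of average seek time plus half a rotation. The seek-time figures below are illustrative ballpark values, not measurements from any particular drive.

```python
def hdd_iops(avg_seek_ms, rpm):
    """Rule-of-thumb random IOPS: 1 / (avg seek + half a rotation)."""
    half_rotation_ms = 0.5 * 60000.0 / rpm  # ms per half revolution
    return 1000.0 / (avg_seek_ms + half_rotation_ms)

# e.g. a 15K RPM drive with ~3.5 ms average seek lands in the high 100s
fast = hdd_iops(3.5, 15000)
```

This is exactly why the rule of thumb has eroded: caching, queuing, short-stroking and sequential-friendly workloads all push real drives away from this simple mechanical floor, which is the point the post series measures in practice.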

    Part II: What I did with Lenovo TS140 in my Server and Storage I/O Review – This is a popular post series on some things I have done with a Lenovo TS140, including defining it with various software as well as hardware. This is a great price-performer value system; several years ago, after testing the one Lenovo sent me, I returned it to Lenovo and bought several of them to join my other systems.

    Server and Storage I/O Benchmarking and Performance Resources – This is a collection of various server, storage I/O and networking hardware, software and services tools, techniques as well as tips for benchmarking, comparing, simulating and testing, as well as gaining insight across cloud, virtual, container and legacy resources.

    Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I) – This is one of the tools found on the server, storage I/O benchmarking and performance resources page. Diskspd is a tool developed by Microsoft as an alternative to using Iometer, vdbench, fio.exe and SQLIO among many others; plus, it is on GitHub.


    The NVM (Non Volatile Memory) and NVMe Place – Interest in and adoption of NAND flash, NVRAM and 3D XPoint among other SSD and non-volatile memory (NVM) technologies continues. Another popular post that you can find at thenvmeplace.com is this NVMe overview and primer – Part I. There is growing interest, awareness and deployment adoption around NVM Express (NVMe), the new protocol for accessing NVMs and SSDs. One of the common conversations and questions I encounter is confusion between NVM and NVMe, to which the answer is that one (the former) is the media or devices, while the other is the access method, an alternative to using AHCI/SATA or SCSI (e.g. SAS, iSCSI, FCP, SRP) among others.

    VMware VVOLs and storage I/O fundamentals (Part 1) – VMware Virtual Volumes (VVOL) continue to gain adoption, and this post is part of an overview and primer. If you want to go deeper into VVOL as well as see some adoption insights, check out Eric Siebert's post here over at vsphere-land.com.

    Welcome to the Object Storage Center page – This is a micro site with a primer and overview of cloud as well as object storage, along with an expanding list of links to various resources, tips, technologies, tools, trends and industry activity.

    Where To Learn More

    Visit www.storageio.com, particularly if you have not been there for a while, to check out the new streamlined look and navigation to various content, including Server StorageIO update newsletters (free subscription) among other resources.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means and wrapping up

    Some of the popular posts for 2016 are perennial favorites and, based on experience, will probably appear on the 2017 list. However, there are also several new posts from 2016 that I suspect will appear on the 2017 version of the above list, along with new content from 2017.

    Thank you to all of you who frequent StorageIOblog.com as well as StorageIO.com along with our various micro sites including server storage I/O performance and benchmarking resources, thenvmeplace.com, thessdplace.com, cloud and objectstoragecenter.com, data protection diaries among others.

    Also thank you for viewing various partner venues and syndicates with extra ones appearing throughout 2017. Watch for more content in the coming weeks, months and throughout 2017 on software defined data infrastructures (SDDI) along with server, storage I/O, networking, hardware, software, cloud, container, data protection and related topics, trends, technologies, tools and tips.

    Again, thank you

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    The Value of Infrastructure Insight – Enabling Informed Decision Making


    server storage I/O trends

    Join me and Virtual Instruments CTO John Gentry on October 27, 2016 for a free webinar (registration required) titled The Value of Infrastructure Insight – Enabling Informed Decision Making with Virtual Instruments. In this webinar, John and I will discuss the value of data center infrastructure insight as both a technology and a business and IT imperative.

    Software Defined Data Infrastructure
    Various Infrastructures – Business, Information, Data and Physical (or cloud)

    Leveraging infrastructure performance analytics is key to assuring the performance, availability and cost-effectiveness of your infrastructure, especially as you transform to a hybrid data center over the coming years. By utilizing real-time and historical infrastructure insight from your servers, storage and networking, you can avoid flying blind and gain situational awareness for proactive decision-making. The result is faster problem resolution, problem avoidance, higher utilization and the elimination of performance slowdowns and outages.

    View the companion Server StorageIO Industry Trends Report available here (free, no registration required) at the Virtual Instruments web page resource center.


    The above Server StorageIO Industry Trends Perspective Report (click here to download PDF) looks at the value of data center infrastructure insight both as a technology as well as a business productivity enabler. Besides productivity, having insight into how data infrastructure resources (servers, storage, networks, system software) are used, enables informed analysis, troubleshooting, planning, forecasting as well as cost-effective decision-making.

    In other words, data center infrastructure insight, based on infrastructure performance analytics, enables you to avoid flying blind, having situational awareness for proactive Information Technology (IT) management. Your return on innovation is increased, and leveraging insight awareness along with metrics that matter drives return on investment (ROI) along with enhanced service delivery.
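    As a simple illustration of why infrastructure performance analytics beat averages alone, the following Python sketch (with made-up latency samples chosen only for illustration) shows how a 95th percentile surfaces a slowdown-causing outlier that the average smooths over:

```python
import math


def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]


# Hypothetical I/O response times in milliseconds; one bad outlier
latencies_ms = [2, 3, 2, 4, 3, 2, 50, 3, 2, 4]

avg = sum(latencies_ms) / len(latencies_ms)
p95 = percentile(latencies_ms, 95)
print(f"average = {avg:.1f} ms, 95th percentile = {p95} ms")
```

    The average (7.5 ms) looks healthy while the 95th percentile (50 ms) reveals the response-time spike that users actually experience, which is the kind of situational awareness the webinar discusses.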

    Where To Learn More

    • Free Server StorageIO Industry Trends Report The Value of Infrastructure Insight – Enabling Informed Decision Making (PDF)
    • Register for the free webinar on October 27, 2016 1PM ET here.
    • View other upcoming and recent events at the Server StorageIO activities page here.

    What This All Means

    What this all means is that the key to making smart, informed decisions involving data infrastructure, servers, storage and I/O across different applications is having insight and awareness. See for yourself how you can gain insight into your existing information factory environment by performing analysis, as well as by comparing and simulating your application workloads for informed decision-making.

    Having insight and awareness (e.g. instruments) allows you to avoid flying blind, enabling smart, safe and informed decisions in different conditions impacting your data infrastructure. How is your investment in hardware, software, services and tools being leveraged to meet given levels of services? Is your information factory (data center and data infrastructure) performing at its peak effectiveness?

    How are you positioned to support growth, improve productivity, remove complexity and costs while evolving from a legacy to a next generation software-defined, cloud, virtual, converged or hyper-converged environment with new application needs?

    Data infrastructure insight benefits and takeaways:

    • Informed performance-related decision-making
    • Support growth, agility, flexibility and availability
    • Maximize resource investment and utilization
    • Find, fix and remove I/O bottlenecks
    • Puts you in control in the driver’s seat

    Remember to register for and attend the October 27 webinar; you can register here.

    Btw, fwiw and as a disclosure, Virtual Instruments has been a client of Server StorageIO.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Are large storage arrays dead at the hands of SSD?

    Storage I/O trends

    An industry trends and perspective.


    Are large storage arrays dead at the hands of SSD? Short answer: NO, not yet.
    There is still a place for traditional storage arrays or appliances, particularly those with extensive features, functionality and reliability, availability and serviceability (RAS). In other words, there is still a place for large (and small) storage arrays or appliances, including those with SSDs.

    Is there a place for newer flash SSD storage systems, appliances and architectures? Yes
    Similar to how traditional midrange storage arrays or appliances have found their roles vs. traditional higher-end, so-called enterprise arrays. Think, as an example, of EMC CLARiiON/VNX or HP EVA/P6000 or HDS AMS/HUS or NetApp FAS or IBM DS5000 or IBM V7000 among others, vs. EMC Symmetrix/DMX/VMAX or HP P10000/3Par or HDS VSP/USP or IBM DS8000. In addition to traditional enterprise or high-end storage systems and midrange (also known as modular) systems, there are also specialized appliances or targets, such as those for backup/restore and archiving. Also do not forget the IO performance SSD appliances, like those from TMS among others, that have been around for a while.

    Is the role of large storage systems changing or evolving? Yes
    Given their scale and ability to do large amounts of work in a dense footprint, for some the role of these systems is still mission critical tier 1 application and data support. For other environments, their role continues to evolve being used for high-density tier 2 bulk or even near-line storage for on-line access at scale.


    Does this mean there is competition between the old and new systems? Yes
    In some circumstances, as we have seen already with SSD solutions. Some will position them as competing or as replacements, while others as complementing. For example, in the PCIe flash SSD card segment, EMC VFCache is positioned as complementing Dell, EMC, HDS, HP, IBM, NetApp, Oracle or other storage, vs. FusionIO, which positions its cards as a replacement for the above and others. Another scenario is how some SSD vendors have positioned, and continue to position, their all-flash SSD arrays, using either drives or PCIe cards, to complement and coexist with other storage systems in an environment (e.g. data center level tiering) vs. as a replacement. Also keep in mind SSD solutions that support a mix of flash devices and traditional HDDs in the same solution for capacity and cost savings or cloud access.

    Does this mean that the industry has adopted all-SSD appliances as the state of the art?
    Avoid confusing industry adoption or talk with industry and customer deployment. They are similar; however, one is focused on what the industry talks about or discusses as the state of the art or the future, while the other is what customers are actually doing. Certainly some of the new flash SSD appliance and storage startups such as Solidfire, Nexgen, Violin, Whiptail or veteran TMS among others have promising futures, some of which may actually be in play with the current SSD market shakeout and consolidation.

    Does that mean everybody is going SSD?
    SSD customer adoption and deployment continues to grow, however so too does the deployment of high-capacity HDDs.


    Do SSDs need HDDs, do HDDs need SSDs? Yes
    Granted there are environments where needs can be addressed entirely by one or the other. However, at least near term, there is a very strong market for tiering and a mix of SSD, some fast HDDs and lots of high-capacity HDDs to meet various needs including performance, availability, capacity, energy and economics. After all, there is no such thing as a data or information recession, yet budgets are tight or being reduced. Likewise, people and data are living longer.
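    The economics behind such a tiered mix can be sketched with a few lines of arithmetic. All of the capacities and per-GB prices below are hypothetical, chosen only to show how a small flash tier blends with a large amount of high-capacity HDD:

```python
# Hypothetical tier capacities and per-GB prices -- illustrative only
tiers = {
    "flash SSD":         {"gb": 2_000,  "usd_per_gb": 1.50},
    "fast HDD":          {"gb": 10_000, "usd_per_gb": 0.30},
    "high-capacity HDD": {"gb": 88_000, "usd_per_gb": 0.05},
}

total_gb = sum(t["gb"] for t in tiers.values())
total_usd = sum(t["gb"] * t["usd_per_gb"] for t in tiers.values())

print(f"total capacity: {total_gb:,} GB")
print(f"blended cost:   ${total_usd / total_gb:.3f}/GB")
```

    Swap in your own capacities and prices; the takeaway is that the blended cost per GB lands far closer to the high-capacity tier than to the flash tier, which is why the mix persists.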

    What does this mean?
    If there were no such thing as a data recession and budgets were a non-issue, perhaps everything could move to all-flash SSD storage systems. However, we also know that people and data are living longer, along with changing data life-cycle patterns. There is also the need for performance to close the traditional data center IO performance to space capacity gap and bottlenecks, as well as to store and keep data longer.

    There will continue to be a need for a mix of high capacity and high performance. More IO will continue to gravitate towards the IO appliances; however, more data will settle in for longer-term retention and continued access as data life-cycles continue to evolve. Watch for more SSD and cache in the large systems, along with higher density SAS-NL (SAS Near Line, e.g. high capacity) type drives appearing in those systems.

    If you like shiny new toys or technologies (SNTs) to buy, sell or talk about, there will be plenty of those to continue industry adoption, while for those who are focused on industry deployment, there will be a mix of new technology and continued evolution of existing implementations.

    Related links
    Industry adoption vs. industry deployment, is there a difference?

    Industry trend: People plus data are aging and living longer

    No Such Thing as an Information Recession

    Changing Lifecycles & Data Footprint Reduction
    What is the best kind of IO? The one you do not have to do
    Is SSD dead? No, however some vendors might be
    Speaking of speeding up business with SSD storage
    Are Hard Disk Drives (HDD’s) getting too big?
    IT and storage economics 101, supply and demand
    Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
    Why SSD based arrays and storage appliances can be a good idea (Part I)
    Researchers and marketers don’t agree on future of nand flash SSD
    EMC VFCache respinning SSD and intelligent caching (Part I)
    SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio
