Chelsio Storage over IP and other Networks Enable Data Infrastructures

server storage I/O data infrastructure trends

Chelsio and Storage over IP (SoIP) continue to enable Data Infrastructures from legacy to software defined virtual, container, cloud as well as converged. This past week I had a chance to visit with Chelsio to discuss data infrastructures, server storage I/O networking along with other related topics. More on Chelsio later in this post; however, for now let's take a quick step back and refresh what SoIP (Storage over IP) is, along with Storage over Ethernet (among other networks).

Data Infrastructures Protect Preserve Secure and Serve Information
Various IT and Cloud Infrastructure Layers including Data Infrastructures

Server Storage over IP Revisited

There are many variations of SoIP, from network attached storage (NAS) file-based processing including NFS and Samba/SMB (aka Windows file sharing) among others, to various block protocols such as SCSI over IP (e.g. iSCSI), along with object via HTTP/HTTPS, not to mention the buzzword bingo list of RoCE, iSER, iWARP, RDMA, DPDK, FTP, FCoE, iFCP, and SMB Direct to name a few.
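The object flavor of SoIP is easy to see in action, since an object read is simply an HTTP GET against a bucket and key. The following minimal Python sketch (standard library only; the bucket and object names are made up for illustration) serves a toy in-memory "object" locally and fetches it over HTTP to show the access pattern, not any particular vendor's API.

```python
import http.server
import threading
import urllib.request

# A toy in-memory "object store": path (bucket/key) -> bytes.
# Names are illustrative, not a real service layout.
OBJECTS = {"/bucket1/greeting.txt": b"hello object storage over IP"}

class ObjectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        data = OBJECTS.get(self.path)
        if data is None:
            self.send_error(404, "no such object")
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep output quiet
        pass

# Serve on an ephemeral local port, then do an object GET over HTTP
server = http.server.HTTPServer(("127.0.0.1", 0), ObjectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/bucket1/greeting.txt"
body = urllib.request.urlopen(url).read()
server.shutdown()
print(body.decode())
```

Real object services (S3 and friends) layer authentication, metadata and multipart semantics on top, but the wire-level idea is the same: storage access carried over ordinary IP networking.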

Who is Chelsio

For those who are not aware or need a refresher, Chelsio is involved with enabling server storage I/O by creating ASICs (Application Specific Integrated Circuits) that offload various functions from the host server processor. For some, this is a throwback to the TCP Offload Engine (TOE) era of the early 2000s, when processing for regular network traffic, along with iSCSI and other storage over Ethernet and IP, could be accelerated.

Chelsio data infrastructure focus

Chelsio ecosystem across different data infrastructure focus areas and application workloads

As seen in the image above, there is certainly a server and storage I/O network play with Chelsio, along with traffic management, packet inspection, security (encryption, SSL and other offload), and traditional, commercial, web, high performance compute (HPC) along with high profit or productivity compute (the other HPC) workloads. Chelsio also enables data infrastructures that are part of physical bare metal (BM), software defined virtual, container, cloud and serverless environments, among others.

Chelsio server storage I/O focus

The above image shows how Chelsio enables initiators on server and storage appliances as well as targets via various storage over IP (or Ethernet) protocols.

Chelsio enabling various data center resources

Chelsio also plays in several different sectors, from *NIX to Windows, cloud to containers, and across various processor architectures and hypervisors.

Chelsio ecosystem

Besides diverse server storage I/O enabling capabilities across various data infrastructure environments, what caught my eye with Chelsio is how far they, and storage over IP have progressed over the past decade (or more). Granted there are faster underlying networks today, however the offload and specialized chip sets (e.g. ASICs) have also progressed as seen in the above and next series of images via Chelsio.

The above shows TCP and UDP acceleration; the following shows Microsoft SMB 3.1.1 performance, something important for doing Storage Spaces Direct (S2D) and Windows-based Converged Infrastructure (CI) along with Hyper Converged Infrastructure (HCI) deployments.

Chelsio software environments

Something else that caught my eye was iSCSI performance, which in the following shows 4 initiators accessing a single target doing about 4 million IOPS (reads and writes) across various I/O sizes and configurations. Granted, that is with a 100Gb network interface; however, it also shows that potential bottlenecks are removed, enabling that faster network to be used more effectively.
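As a sanity check on numbers like these, it helps to relate IOPS to line rate: required bandwidth is simply IOPS times I/O size. The quick back-of-the-envelope calculation below uses illustrative I/O sizes (not Chelsio's published test parameters) and ignores protocol framing overhead.

```python
def iops_to_gbps(iops, io_size_bytes):
    """Network bandwidth (Gbit/s) needed to carry a given IOPS rate,
    ignoring protocol framing and header overhead."""
    return iops * io_size_bytes * 8 / 1e9

# 4 million IOPS at various I/O sizes vs a 100GbE link
for size in (512, 2048, 4096):
    print(size, "bytes ->", round(iops_to_gbps(4_000_000, size), 1), "Gb/s")
```

Small I/Os at 4 million IOPS fit easily within a 100GbE port; at 4KB the payload alone (roughly 131 Gb/s) would exceed a single 100Gb link, which is why large-block workloads become bandwidth bound while small-block workloads stress packet and IOPS processing instead.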

Chelsio server storage I/O performance

Moving on from TCP, UDP and iSCSI, NVMe and in particular NVMe over Fabric (NVMeoF) have become popular industry topics, so check out the following. One of my comments to Chelsio is to add host or server CPU usage to the following chart to help tell the story and value proposition of NVMe in general: do more I/O activity while consuming less server-side resources. Let's see what they put out in the future.

Chelsio

Ok, so Chelsio does storage over IP, storage over Ethernet and other interfaces, accelerating performance as well as regular TCP and UDP activity. One of the other benefits of what Chelsio and others are doing with their ASICs (or FPGAs for some) is to offload processing for security among other functions. Given the increased focus on server storage I/O and data infrastructure security, from encryption to SSL and related usage that requires more resources, these new ASICs, such as those from Chelsio, help to offload various specialized processing from the server.

The customer benefit is that more productive application work can be done by their servers (or storage appliances). For example, if you have a database server, that means more productive database transactions per second per software license. Put another way, want to get more value out of your Oracle, Microsoft or other vendors' software licenses? Simple: get more work done per licensed server by offloading and eliminating waits or other bottlenecks.
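The license-value argument is simple arithmetic: for a fixed per-server (or per-core) license cost, the cost per transaction falls as offloads raise sustained transactions per second. The figures below are entirely hypothetical, just to show the shape of the math.

```python
def cost_per_transaction(license_cost, tps):
    """License cost amortized per transaction/second of sustained work."""
    return license_cost / tps

license_cost = 50_000   # hypothetical per-server database license ($)
baseline_tps = 10_000   # transactions/sec without offload (made up)
offload_tps = 13_000    # same server with network/storage offloads (made up)

print(cost_per_transaction(license_cost, baseline_tps))  # 5.0
print(round(cost_per_transaction(license_cost, offload_tps), 2))  # 3.85
```

Same license, same server: a 30% throughput gain from offloading drops the license cost per unit of work by roughly a quarter, which is the point about getting value out of software licenses rather than just hardware.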

Using offloads and removing server bottlenecks might seem like common sense; however, I'm still amazed at the number of organizations that are more focused on getting extra value out of their hardware vs. getting value out of their software licenses (which might be more expensive).

Chelsio

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Data Infrastructures Protect Preserve Secure and Serve Information
Various IT and Cloud Infrastructure Layers including Data Infrastructures

What This All Means

Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. With more data being created at a faster rate, along with the size of data becoming larger, increased application functionality to transform data into information means more demands on data infrastructures and their underlying resources.

This means more server I/O to storage system and other servers, along with increased use of SoIP as well as storage over Ethernet and other interfaces including NVMe. Chelsio (and others) are addressing the various application and workload demands by enabling more robust, productive, effective and efficient data infrastructures.

Check out Chelsio and how they are using storage over IP (SoIP) to enable Data Infrastructures from legacy to software defined virtual, container, cloud as well as converged. Oh, and thanks Chelsio for being able to use the above images.

Ok, nuff said, for now.
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Microsoft Azure Software Defined Data Infrastructure Reference Resources

Azure Software Defined Data Infrastructure Architecture Resources

Need to learn more about Microsoft Azure Cloud Software Defined Data Infrastructure topics including reference architecture among other resources for various application workloads?

Microsoft Azure has an architecture and resources page (here) that includes various application workload reference tools.

Microsoft Azure Software Defined Cloud
Azure Reference Architectures via Microsoft Azure

Examples of Azure Reference Architectures span various applications and workloads, among others.

For example, need to know how to configure a high availability (HA) SharePoint deployment with Azure? Then check out the reference architecture shown below.

Microsoft Azure SharePoint HA reference architecture
SharePoint HA via Microsoft Azure

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Data Infrastructures Protect Preserve Secure and Serve Information
Various IT and Cloud Infrastructure Layers including Data Infrastructures

What This All Means

Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. Software Defined Data Infrastructures span legacy, virtual, container, cloud and other environments to support various application workloads. Check out the Microsoft Azure cloud reference architecture and resources mentioned above as well as the Azure Free trial and getting started site here.

Ok, nuff said, for now.
Gs


Who Will Be At Top Of Storage World Next Decade?

server storage I/O data infrastructure trends

Data storage, regardless of whether it is hardware, legacy, new, emerging, a cloud service or one of various software defined storage (SDS) approaches, is a fundamental resource component of data infrastructures, along with compute servers, I/O networking, and management tools, techniques, processes and procedures.

fundamental Data Infrastructure resource components
Fundamental Data Infrastructure resources

Data infrastructures include legacy along with software defined data infrastructures (SDDI), along with software defined data centers (SDDC), cloud and other environments to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity).

Data Infrastructures and workloads
Data Infrastructure and other IT Layers (stacks and altitude levels)

Various data infrastructures resource components spanning server, storage, I/O networks, tools along with hardware, software, services get defined as well as composed into solutions or services which may in turn be further aggregated into more extensive higher altitude offerings (e.g. further up the stack).

IT and Data Infrastructure Stack Layers
Various IT and Data Infrastructure Stack Layers (Altitude Levels)

Focus on Data Storage Present and Future Predictions

Drew Robb (@Robbdrew) has a good piece over at Enterprise Storage Forum looking at the past, present and future of who will rule the data storage world, which includes perspective prediction comments from myself as well as others. Some of the perspectives and predictions by others are more generic, technology trend and buzzword bingo focused, which should not be a surprise: for example, the usual performance, Cloud and Object Storage, DPDK, RDMA/RoCE, Software-Defined, NVM/Flash/SSD, CI/HCI and NVMe, among others.

Here are some excerpts from Drew's piece along with my perspective and prediction comments on who may rule the data storage roost in a decade:

Amazon Web Services (AWS) – AWS includes cloud and object storage in the form of S3. However, there is more to storage than object and S3, with AWS also having Elastic File System (EFS), Elastic Block Storage (EBS), database, message queue and on-instance storage, among others, for traditional and emerging workloads as well as storage for the Internet of Things (IoT).

It is difficult to think of AWS not being a major player in a decade unless they totally screw up their execution in the future. Granted, some of their competitors might be working overtime putting pins and needles into Voodoo Dolls (perhaps bought via Amazon.com) while wishing for the demise of Amazon Web Services, just saying.

Voodoo Dolls via Amazon.com
Voodoo Dolls and image via Amazon.com

Of course, Amazon and AWS could follow the likes of Sears (e.g. some may remember their catalog) and ignore the future, ending up on the where-are-they-now list. While talking about Amazon and AWS, one has to wonder where Walmart will end up in a decade, with or without a cloud of their own?

Microsoft – With Windows, Hyper-V and Azure (including Azure Stack), if there is any company in the industry outside of AWS or VMware that has quietly expanded its reach and positioning into storage, it is Microsoft, said Schulz.

Microsoft IMHO has many offerings and capabilities across different dimensions as well as playing fields. There is the installed base of Windows Servers (and desktops) that have the ability to leverage Software Defined Storage including Storage Spaces Direct (S2D), ReFS, cache and tiering among other features. In some ways I’m surprised by the number of people in the industry who are not aware of Microsoft’s capabilities from S2D and the ability to configure CI as well as HCI (Hyper Converged Infrastructure) deployments, or of Hyper-V abilities, Azure Stack to Azure among others. On the other hand, I run into Microsoft people who are not aware of the full portfolio offerings or are just focused on Azure. Needless to say, there is a lot in the Microsoft storage related portfolio as well as bigger broader data infrastructure offerings.

NetApp – Schulz thinks NetApp has the staying power to stay among the leading lights of data storage. Assuming it remains as a freestanding company and does not get acquired, he said, NetApp has the potential of expanding its portfolio with some new acquisitions. “NetApp can continue their transformation from a company with a strong focus on selling one or two products to learning how to sell the complete portfolio with diversity,” said Schulz.

NetApp has been around and survived up to now, including via various acquisitions, some of which have had mixed results vs. others. However, assuming NetApp can continue to reinvent themselves, focusing on selling the entire solution portfolio vs. specific products, along with good execution and some more acquisitions, they have the potential to be a top player through the next decade.

Dell EMC – Dell EMC is another stalwart Schulz thinks will manage to stay on top. “Given their size and focus, Dell EMC should continue to grow, assuming execution goes well,” he said.

There are some who have predicted the demise of Dell EMC; granted, some of those predicted the demise of Dell and or EMC years ago as well. Top companies can and have faded away over time, and while it is possible Dell EMC could be added to the where-are-they-now list in the future, my bet is that, at least while Michael Dell is still involved, they will be a top player through the next decade, unless they mess up on execution.

Cloud and software defined storage data infrastructure
Various Data Infrastructures and Resources involving Data Storage

Huawei – Huawei is one of the emerging giants from China that are steadily gobbling up market share. It is now a top provider in many categories of storage, and its rapid ascendancy is unlikely to stop anytime soon. “Keep an eye on Huawei, particularly outside of the U.S. where they are starting to hit their stride,” said Schulz.

In the US, you have to look or pay attention to see or hear what Huawei is doing involving data storage, however that is different in other parts of the world. For example, I see and hear more about them in Europe than in the US. Will Huawei do more in the US in the future? Good question, keep an eye on them.

VMware – A decade ago, Storage Networking World (SNW) was by far the biggest event in data storage. Everyone who was anyone attended this twice yearly event. And then suddenly, it lost its luster. A new forum known as VMworld had emerged and took precedence. That was just one of the indicators of the disruption caused by VMware. And Schulz expects the company to continue to be a major force in storage. “VMware will remain a dominant player, expanding its role with software-defined storage,” said Schulz.

VMware has a dominant role in data storage not just because of the relationship with Dell EMC, or because of VSAN which continues to gain in popularity, or the soon to be released VMware on AWS solution options, among others. Sure, all of those matter; however, keep in mind that VMware solutions also tie into and work with other legacy as well as software-defined storage solutions, services and tools spanning block, file and object, for virtual machines as well as containers.

"Someday soon, people are going to wake up like they did with VMware and AWS," said Schulz. "That’s when they will be asking ‘When did Microsoft get into storage like this in such a big way.’"

What the above means is that some environments may not be paying attention to what AWS, Microsoft, VMware among others are doing, perhaps discounting them as the old or existing while focusing on the new, emerging, or whatever is trendy in the news this week. On the other hand, some environments may see the solution offerings from those mentioned as not relevant to their specific needs, or not capable of scaling to their requirements.

Keep in mind that it was not that long ago, just a few years, that VMware entered the market with what by today's standards (e.g. VSAN and others) was a relatively small virtual storage appliance offering, not to mention that many people discounted and ignored VMware as a practical storage solution provider. Things and technology change, not to mention there are different needs and solution requirements for various environments. While a solution may not be applicable today, give it some time, and keep an eye on them to avoid being surprised, asking how and when a particular vendor got into storage in such a big way.

Is Future Data Storage World All Cloud?

Perhaps someday everything involving data storage will be in or part of the cloud.

Does this mean everything is going to the cloud, or at least in the next ten years? IMHO the simple answer is no, even though I see more workloads, applications, and data residing in the cloud, there will also be an increase in hybrid deployments.

Note that those hybrids will span local and on-premises or on-site if you prefer, as well as across different clouds or service providers. Granted some environments are or will become all in on clouds, while others are or will become a hybrid or some variation. Also when it comes to clouds, do not be scared, be prepared. Also keep an eye on what is going on with containers, orchestration, management among other related areas involving persistent storage, a good example is Dell EMCcode RexRay among others.

Server Storage I/O resources
Various data storage focus areas along with data infrastructures.

What About Other Vendors, Solutions or Services?

In addition to those mentioned above, there are plenty of other existing, new and emerging vendors, solutions, and services to keep an eye on, look into, test and conduct a proof of concept (PoC) trial as part of being an informed data infrastructure and data storage shopper (or seller).

Keep in mind the component suppliers, some of whom, like Cisco, also provide turnkey solutions that are part of other vendors' offerings (e.g. Dell EMC VxBlock, NetApp FlexPod among others): Broadcom (which includes Avago/LSI and Brocade Fibre Channel, among others), Intel (servers, I/O adapters, memory and SSDs), Mellanox, Micron, Samsung, Seagate and many others.

Others to watch include E8, Excelero, Elastifile (software defined storage), Enmotus (micro-tiering, read Server StorageIOlab report here), Everspin (persistent and storage class memories including NVDIMM), Hedvig (software defined storage), NooBaa, Nutanix, Pivot3, Rozo (software defined storage) and WekaIO (scale out elastic software defined storage, read Server StorageIO report here).

Some other software defined management tools, services, solutions and components I'm keeping an eye on, exploring or digging deeper into (or plan to) include Blue Medora, Datadog, Dell EMCcode and RexRay docker container storage volume management, Google, HPE, IBM Bluemix Cloud aka IBM Softlayer, Kubernetes, Mangstor, OpenStack, Oracle, Retrospect, Rubrik, Quest, StarWind, SolarWinds, StorPool, Turbonomic and Virtuozzo (software defined storage), among many others.

What about those not mentioned? Good question. Some of those I have mentioned in earlier Server StorageIO Update newsletters, and many others are mentioned in my new book "Software Defined Data Infrastructure Essentials" (CRC Press). Then there are those that, once I hear something interesting from them on a regular basis, will get more frequent mentions as well. Of course, there is also a list to be done someday that is basically where are they now, e.g. those that have disappeared, or never lived up to their full hype and marketing (or technology) promises; let's leave that for another day.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Data Infrastructures and workloads
Data Infrastructures Resources (Servers, Storage, I/O Networks) enabling various services

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

It is safe to say that each new year will bring new trends, techniques, technologies, tools, features, functionality as well as solutions involving data storage and data infrastructures. This means a usual safe bet is to say that the current year is the most exciting and has more new things than any in the past when it comes to data infrastructures, along with resources such as data storage. Keep in mind that there are many aspects to data infrastructures as well as storage, all of which are evolving. Who will be at the top of the storage world next decade? What say you?

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

New family of Intel Xeon Scalable Processors enable software defined data infrastructures (SDDI) and SDDC

server storage I/O data infrastructure trends

Today Intel announced a new family of Xeon Scalable Processors (aka Purley) that for some workloads Intel claims are on average 1.65x faster than their predecessors. Note your real improvement will vary based on workload, configuration, benchmark testing, type of processor, memory, and many other server storage I/O performance considerations.

Intel Scalable Xeon Processors
Image via Intel.com

In general the new Intel Xeon Scalable Processors enable legacy and software defined data infrastructures (SDDI), along with software defined data centers (SDDC), cloud and other environments to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity).

Data Infrastructures and workloads

Some target application and environment workloads Intel is positioning these new processors for include, among others:

  • Machine Learning (ML), Artificial Intelligence (AI), advanced analytics, deep learning and big data
  • Networking including software defined network (SDN) and network function virtualization (NFV)
  • Cloud and Virtualization including Azure Stack, Docker and Kubernetes containers, Hyper-V, KVM, OpenStack and VMware vSphere among others
  • High Performance Compute (HPC) and High Productivity Compute (e.g. the other HPC)
  • Storage including legacy and emerging software defined storage software deployed as appliances, systems or serverless deployment modes.

Features of the new Intel Xeon Scalable Processors include:

  • New core microarchitecture with interconnects and on-die memory controllers
  • Sockets (processors) scalable up to 28 cores
  • Improved networking performance using Quick Assist and Data Plane Development Kit (DPDK)
  • Leverages Intel Quick Assist Technology for CPU offload of compute intensive functions including I/O networking, security, AI, ML, big data, analytics and storage functions. Functions that benefit from Quick Assist include cryptography, encryption, authentication, cipher operations, digital signatures, key exchange, lossless data compression and data footprint reduction along with data at rest encryption (DARE).
  • Optane Non-Volatile Dual Inline Memory Module (NVDIMM) for storage class memory (SCM), also referred to by some as Persistent Memory (PM), not to be confused with Physical Machine (PM)
  • Supports Advanced Vector Extensions 512 (AVX-512) for HPC and other workloads
  • Optional Omni-Path Fabric in addition to 1/10Gb Ethernet among other I/O options
  • Six memory channels supporting up to 6TB of RDIMM memory in multi-socket systems
  • From two to eight sockets per node (system)
  • Systems support PCIe 3.x (some supporting x4 based M.2 interconnects)
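To make the lossless-compression bullet above concrete, the sketch below measures a compression ratio in software using Python's zlib; on these platforms, Quick Assist can offload this same class of work from the host CPU. The sample data is made up and deliberately repetitive, so the ratio shown is far better than typical real-world data.

```python
import zlib

# Repetitive data compresses well; think log files or database pages
original = b"server storage I/O data infrastructure " * 256
compressed = zlib.compress(original, level=6)

ratio = len(original) / len(compressed)
print(f"{len(original)} -> {len(compressed)} bytes, ratio {ratio:.1f}:1")

# Lossless means the round trip must reproduce the input exactly
assert zlib.decompress(compressed) == original
```

The offload value proposition is that the same deflate-style work (plus cryptography and the other functions listed) runs on dedicated silicon instead of consuming general purpose cores.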

Note that exact speeds, feeds, slots and watts will vary by specific server model and vendor options. Also note that some server system solutions have two or more nodes (e.g. two or more real servers) in a single package not to be confused with two or more sockets per node (system or motherboard). Refer to the where to learn more section below for links to Intel benchmarks and other resources.

Software Defined Data Infrastructures, SDDC, SDX and SDDI

What About Speeds and Feeds

Watch for and check out the various Intel partners who have or will be announcing their new server compute platforms based on Intel Xeon Scalable Processors. Each of the different vendors will have various speeds and feeds options that build on the fundamental Intel Xeon Scalable Processor capabilities.

For example Dell EMC announced their 14G server platforms at the May 2017 Dell EMC World event with details to follow (e.g. after the Intel announcements).

Some things to keep in mind: the amount of DDR4 DRAM (or Optane NVDIMM) will vary by vendor server platform configuration, motherboard, and number of sockets and DIMM slots. Also keep in mind the differences between registered DIMMs (e.g. buffered RDIMM) that give good capacity and great performance, and load reduced DIMMs (LRDIMM) that have great capacity and ok performance.

Various nvme options

What about NVMe

It’s there, as these systems, like previous Intel models, support NVMe devices via PCIe 3.x slots, with some vendor solutions also supporting M.2 x4 physical interconnects.

server storageIO flash and SSD
Image via Software Defined Data Infrastructure Essentials (CRC)

Note that Broadcom formerly known as Avago and LSI recently announced PCIe based RAID and adapter cards that support NVMe attached devices in addition to SAS and SATA.

server storage data infrastructure sddi

What About Intel and Storage

In case you have not connected the dots yet, the Intel Xeon Scalable Processor based server (aka compute) systems are also a fundamental platform for storage systems, services, solutions, appliances along with tin-wrapped software.

What this means is that Intel Xeon Scalable Processor based systems can be used for deploying legacy as well as new and emerging software-defined storage software solutions. This also means that the Intel platforms can be used to support SDDC, SDDI, SDX and SDI, as well as other forms of legacy and software-defined data infrastructures, along with cloud, virtual, container and serverless among other modes of deployment.

Image Via Intel.com

Moving beyond server and compute platforms, there is another tie to storage as part of this recent as well as other Intel announcements. Just a few weeks ago Intel announced 64 layer triple level cell (TLC) 3D NAND solutions positioned for the client market (laptops, workstations, tablets, thin clients). With that announcement Intel increased the traditional areal density (e.g. bits per square inch or cm) as well as boosting the number of layers (stacking more bits as well).

The net result is not only more bits per square inch, but also more per cubic inch or cm. This is all part of a continued evolution of NAND flash, including from 2D to 3D, MLC to TLC, and 32 to 64 layers. In other words, NAND flash-based Solid State Devices (SSDs) are very much still a relevant and continually enhanced technology, even with the emerging 3D XPoint and Optane (also available via Amazon in M.2) in the wings.
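The density math behind these moves is straightforward: capacity per die scales roughly with bits-per-cell times layer count, so going from 32-layer MLC (2 bits per cell) to 64-layer TLC (3 bits per cell) triples the bits in the same die footprint. The calculation below is a rough idealization that ignores real-world overheads such as ECC, spare area and peripheral circuitry.

```python
def relative_density(bits_per_cell, layers, base_bits=2, base_layers=32):
    """Idealized capacity multiplier vs a 32-layer MLC baseline."""
    return (bits_per_cell * layers) / (base_bits * base_layers)

print(relative_density(2, 32))   # baseline: 32-layer MLC
print(relative_density(3, 64))   # 64-layer TLC in the same footprint
```

This is why layer stacking (plus more bits per cell) keeps NAND on its cost-per-bit curve even as 2D lithography shrinks run out of room.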

server memory evolution
Via Intel and Micron (3D XPoint launch)

Keep in mind that NAND flash-based technologies were announced almost 20 years ago (1999), and are still evolving. 3D XPoint announced two years ago, along with other emerging storage class memories (SCM), non-volatile memory (NVM) and persistent memory (PM) devices are part of the future as is 3D NAND (among others). Speaking of 3D XPoint and Optane, Intel had announcements about that in the past as well.

Where To Learn More

Learn more about Intel Xeon Scalable Processors along with related technology, trends, tools, techniques and tips with the following links.

What This All Means

Some say the PC is dead, and IMHO that depends on what you mean by or how you define a PC. For example, if you use PC generically to also include servers besides workstations or other devices, then they are alive. If however your view is that PCs are only workstations and client devices, then they are on the decline.

However if your view is that a PC is defined by the underlying processor, such as an Intel general purpose 64 bit x86 derivative (or descendent), then they are very much alive. Just as older generations of PCs leveraging general purpose Intel based x86 (and its predecessors) processors were deployed for many uses, so too are today's line of Xeon (among others) processors.

Even with the increase of ARM, GPU and other specialized processors, as well as ASICs and FPGAs for offloads, the role of general purpose processors continues to increase, as does the technology evolution around them. Even with so called serverless architectures, there still need to be underlying compute server platforms for running software, which also includes software defined storage, software defined networks, SDDC, SDDI, SDX and IoT among others.

Overall this is a good set of announcements by Intel and what we can also expect to be a flood of enhancements from their partners who will use the new family of Intel Xeon Scalable Processors in their products to enable software defined data infrastructures (SDDI) and SDDC.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

GDPR (General Data Protection Regulation) Resources Are You Ready?

The new European General Data Protection Regulation (GDPR) goes into effect in a year, on May 25, 2018; are you ready?

What Is GDPR

If your initial response is that you are not in Europe and do not need to be concerned about GDPR, you might want to step back and review that thought. While it is possible that some organizations may not be affected by GDPR in Europe directly, there might be indirect considerations. For example, GDPR, while focused on Europe, has ties to other initiatives in place or being planned elsewhere in the world. Likewise, unlike earlier regulatory compliance efforts that tended to focus on specific industries such as healthcare (HIPAA and HITECH) or financial (SARBOX, Dodd/Frank among others), these new regulations can be more far-reaching.

Where To Learn More

Acronis GDPR Resources

  • Acronis Outlines GDPR position

Quest GDPR Resources

Microsoft and Azure Cloud GDPR Resources

Do you have or know of relevant GDPR information and resources? Feel free to add them via comments or send us an email; however, please hold the spam and sales pitches, as comments will be moderated.

What This All Means

Now is the time to start planning and preparing for GDPR if you have not done so and need to, as well as becoming more generally aware of it and other initiatives. One of the key takeaways is that while the word compliance is involved, there is much more to GDPR than just compliance as we have seen in the past. With GDPR and other initiatives, data protection becomes the focus, including privacy, protect, preserve, secure and serve, as well as manage, insight and awareness along with associated reporting.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Microsoft Windows Server, Azure, Nano Life cycle Updates

server storage I/O trends

Microsoft Windows Server, Azure, Nano and life cycle Updates

For those of you who have an interest in Microsoft Windows Server on-premises, on Azure, on Hyper-V or Nano life cycle here are some recently announced updates.

Microsoft Windows Server Nano Lifecycle

Microsoft has announced updates to Windows Server Core and Nano along with semi-annual channel updates (read more here). The synopsis of this new update via Microsoft (read more here) is:

In this new model, Windows Server releases are identified by the year and month of release: for example, in 2017, a release in the 9th month (September) would be identified as version 1709. Windows Server will release semi-annually in fall and spring. Another release in March 2018 would be version 1803. The support lifecycle for each release is 18 months.
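The version naming and support math above are easy to sketch in code. This is an illustrative snippet only (the function names and month arithmetic are my own, not a Microsoft tool), assuming the support end date carries the release day forward:

```python
from datetime import date

def semi_annual_version(release: date) -> str:
    # Two-digit year + two-digit month, e.g. September 2017 -> "1709".
    return f"{release.year % 100:02d}{release.month:02d}"

def support_end(release: date, months: int = 18) -> date:
    # Add the 18-month support lifecycle; assumes the release day
    # exists in the target month (true for the usual day-1 dates).
    total = release.year * 12 + (release.month - 1) + months
    return date(total // 12, total % 12 + 1, release.day)

print(semi_annual_version(date(2017, 9, 1)))  # 1709
print(semi_annual_version(date(2018, 3, 1)))  # 1803
print(support_end(date(2017, 9, 1)))          # 2019-03-01
```

Running the sketch for the two examples Microsoft gives yields 1709 and 1803, with 1709 support ending in March 2019.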

Microsoft has announced that its lightweight variant of Windows Server 2016 (if you need a refresher on server requirements visit here) known as Nano will now be focused on Windows-based containers as opposed to bare metal. As part of this change, Microsoft has reiterated that Server Core, the headless (aka non-desktop user interface) version of Windows Server 2016, will continue as the platform for BM along with other deployments where a GUI interface is not needed. Note that one of the original premises of Nano was that it could be leveraged as a replacement for Server Core.

As part of this shift, Microsoft has also stated its intention to further streamline the already slimmed down version of Windows Server known as Nano by reducing its size another 50%. Keep in mind that Nano is already a fraction of the footprint size of regular Windows Server (Core or Desktop UI). The footprint of Nano includes its capacity size on disk (HDD or SSD), its memory requirements and its speed of startup boot, along with a reduced number of components that cuts the number of updates.

By focusing Nano on container use (e.g. Windows containers), Microsoft is providing multiple microservices engines (e.g. Linux and Windows) along with various management options including Docker. Similar to providing multiple container engines, Microsoft is also supporting management from Windows along with Unix.

Does This Confirm Rumor FUD that Nano is Dead

IMHO the FUD rumors circulating that Nano is dead are false.

Granted, Nano is being refocused by Microsoft for containers and will not be the lightweight headless Windows Server 2016 replacement for Server Core. Instead, the Microsoft focus is two-pronged: continued enhancements to Server Core for headless full Windows Server 2016 deployment, while Nano gets further streamlined for containers. This means that Nano is no longer bare metal or Hyper-V focused, with Microsoft indicating that Server Core should be used for those types of deployments.

What is clear (besides no bare metal) is that Microsoft is working to slim down Nano even further by removing bare metal items, PowerShell, .NET and other items, instead making those optional. The goal is to make the base Nano image on disk (or via pull) as small as possible, with the initial target being 50% of its current uncompressed 1GB disk size. What this means is that if you need PowerShell, you add it as a layer; if you need .NET, you add that as a layer, instead of carrying the overhead of those items when you do not need them. It will be interesting to see how much Microsoft is able to remove as standard components and make into options that you can simply add as layers if needed.

What About Azure and Bring Your Own License

In case you were not aware or had forgotten, when you use Microsoft Azure and deploy virtual machines (aka cloud instances), you have the option of bringing (e.g. using) your own Windows Server licenses. What this means is that by using your own Windows Server licenses you can cut the monthly cost of your Azure VMs. Check out the Azure site and explore various configuration options to learn more about pricing and various virtual machine instances from Windows to Linux here, as well as hybrid deployments.
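As a back-of-the-envelope illustration of why bringing your own license matters, here is a hedged sketch; the hourly rates below are made-up placeholders, not actual Azure pricing:

```python
# Hypothetical illustration of bring-your-own-license savings; the hourly
# rates are invented placeholders, not actual Azure pricing.
HOURS_PER_MONTH = 730

def monthly_cost(compute_rate: float, license_rate: float, byol: bool) -> float:
    # With bring-your-own-license, the Windows Server license portion
    # of the hourly rate drops out of the bill.
    rate = compute_rate + (0.0 if byol else license_rate)
    return round(rate * HOURS_PER_MONTH, 2)

pay_as_you_go = monthly_cost(0.10, 0.04, byol=False)
bring_your_own = monthly_cost(0.10, 0.04, byol=True)
print(pay_as_you_go, bring_your_own, round(pay_as_you_go - bring_your_own, 2))
```

Even with placeholder rates, the point is visible: the license component compounds across every VM, every hour, all month.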

Where To Learn More

What This All Means

Microsoft has refocused Windows Server 2016 Core and Desktop as its primary bare metal, virtual and Azure OS platforms, while Nano is now focused on being optimized for Windows-based containers, including Docker among other container orchestration options.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

May 2017 Server StorageIO Data Infrastructures Update Newsletter

Volume 17, Issue V

Hello and welcome to the May 2017 issue of the Server StorageIO update newsletter.

Summer is officially still a few weeks away here in the northern hemisphere; however, for all practical purposes it has arrived. What this means is that in addition to normal workplace activities and projects, there are plenty of outdoor things (as well as distractions) to attend to.

Over the past several months I have mentioned a new book that is due out this summer, which means it’s getting close to announcement time. The new book title is Software Defined Data Infrastructure Essentials – Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press/Taylor & Francis/Auerbach), which you can learn more about here (with more details being added soon). A common question is whether there will be electronic versions of the book, and the answer is yes (more on this in a future newsletter).

Data Infrastructures

Another common question is what is it about, what is a data infrastructure (see this post) and what is tradecraft (see this post).

Software-Defined Data Infrastructures Essentials provides fundamental coverage of physical, cloud, converged, and virtual server storage I/O networking technologies, trends, tools, techniques, and tradecraft skills. From webscale, software-defined, containers, database, key-value store, cloud, and enterprise to small or medium-size business, the book is filled with techniques, and tips to help develop or refine your server storage I/O hardware, software, and services skills. Whether you are new to data infrastructures or a seasoned pro, you will find this comprehensive reference indispensable for gaining as well as expanding experience with technologies, tools, techniques, and trends.

Software-Defined Data Infrastructure Essentials SDDI SDDC
ISBN-13: 978-1498738156
ISBN-10: 149873815X
Hardcover: 672 pages
Publisher: Auerbach Publications; 1 edition (June 2017)
Language: English

Watch for more news and insight about my new book Software-Defined Data Infrastructure Essentials soon. In the meantime, check out the various items below in this edition of the Server StorageIO Update.

In This Issue

Enjoy this edition of the Server StorageIO update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Some recent Industry Activities, Trends, News and Announcements include:

Flackbox.com has some new independent (non NetApp produced) learning resources, including a NetApp simulator eBook and MetroCluster tutorial. Over in the Microsoft world, Thomas Maurer has a good piece about Windows Server build 2017 and all about containers. Microsoft also announced that SQL Server 2017 CTP 2.1 is now available. Meanwhile, here are some of my experiences and thoughts from test driving Microsoft Azure Stack.

Speaking of NetApp, among other announcements they released a new version of their StorageGrid object storage software. NVMe activity in the industry (and at customer sites) continues to increase with Cavium Qlogic NVMe over Fabric news, along with Broadcom's recent NVMe RAID announcements. Keep in mind that if the answer is NVMe, then what are the questions?

Here is a good summary of the recent OpenStack Boston Summit. StorPool made a momentum announcement; for those of you into software defined storage, add StorPool to your watch list. On the VMware front, check out this vSAN 6.6 stretched cluster demo (video) via Yellow Bricks.

Check out other industry news, comments, trends perspectives here.

Server StorageIOblog Posts

Recent and popular Server StorageIOblog posts include:

View other recent as well as past StorageIOblog posts here

Server StorageIO Commentary in the news

Recent Server StorageIO industry trends perspectives commentary in the news.

Via EnterpriseStorageForum: What to Do with Legacy Assets in a Flash Storage World
There is still a place for hybrid arrays. A hybrid array is the home run when it comes to leveraging your existing non-flash, non-SSD based assets today.

Via EnterpriseStorageForum: Where All-Flash Storage Makes No Sense
A bit of flash in the right place can go a long way, and everybody can benefit from at least some flash somewhere. Some might say the more, the better. However, where you have budget constraints that simply prevent you from having more flash for things such as cold, inactive, or seldom accessed data, you should explore other options.

Via Bitpipe: Changing With the Times – Protecting VMs(PDF)

Via FedTech: Storage Strategies: Agencies Optimize Data Centers by Focusing on Storage

Via SearchCloudStorage: Dell EMC cloud storage strategy needs to cut through fog

Via SearchStorage: Microsemi upgrades controllers based on HPE technology

Via EnterpriseStorageForum: 8 Data Machine Learning and AI Storage Tips

Via SiliconAngle: Dell EMC announces hybrid cloud platform for Azure Stack

View more Server, Storage and I/O trends and perspectives comments here

Events and Activities

Recent and upcoming event activities.

Sep. 13-15, 2017 – Fujifilm IT Executive Summit – Seattle WA

August 28-30, 2017 – VMworld – Las Vegas

July 22, 2017 – TBA

June 22, 2017 – Webinar – GDPR and Microsoft Environments

May 11, 2017 – Webinar – Email Archiving, Compliance and Ransomware

See more webinars and activities on the Server StorageIO Events page here.

Server StorageIO Industry Resources and Links

Useful links and pages:
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Ok, nuff said, for now.

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

GDPR goes into effect May 25 2018 Are You Ready?

server storage I/O trends

GDPR goes into effect May 25 2018 Are You Ready?

The new European General Data Protection Regulation (GDPR) go into effect in a year on May 25 2018 are you ready?

Why Become GDPR Aware

If your initial response is that you are not in Europe and do not need to be concerned about GDPR, you might want to step back and review that thought. While it is possible that some organizations may not be affected by GDPR in Europe directly, there might be indirect considerations. For example, GDPR, while focused on Europe, has ties to other initiatives in place or being planned elsewhere in the world. Likewise, unlike earlier regulatory compliance efforts that tended to focus on specific industries such as healthcare (HIPAA and HITECH) or financial (SARBOX, Dodd/Frank among others), these new regulations can be more far-reaching.

GDPR Looking Beyond Compliance

Taking a step back, GDPR, as its name implies, is about general data protection, including how information is protected, preserved, secured and served. This also includes taking safeguards to logically protect data with passwords, encryption and other techniques. Another dimension of GDPR is reporting and the ability to track who has accessed what information (including when), as well as simply knowing what data you have.
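As a thought exercise, that access-trail dimension can be sketched as a minimal audit record; the field names and JSON format here are illustrative assumptions, not anything prescribed by GDPR:

```python
import json
from datetime import datetime, timezone

def audit_record(user: str, resource: str, action: str) -> str:
    # Capture who accessed what, when -- the kind of access trail that
    # GDPR-style reporting implies. Field names are illustrative only.
    entry = {
        "who": user,
        "what": resource,
        "action": action,
        "when": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)

print(audit_record("some_user", "customer_records/42", "read"))
```

In practice such records would feed whatever reporting and alerting your data protection officer relies on; the point is simply that the who/what/when must be captured somewhere.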

What this means is that GDPR impacts users from consumers of social media such as Facebook, Instagram, Twitter and LinkedIn among others, to cloud storage and related services, as well as traditional applications. In other words, GDPR is not just for finance or healthcare; it is more far-reaching, making sure you know what data exists and that you take adequate steps to protect it.

There is a lot more to discuss about GDPR in Europe as well as what else is being done in other parts of the world. For now, being aware of initiatives such as GDPR and their broader scope and impact beyond traditional compliance is important. With these new initiatives, the focus expands from the compliance office or officers to the data protection office and data protection officer, whose scope is to protect, preserve, secure and serve data along with associated information.

GDPR and Microsoft Environments

As part of generating awareness and help planning, I’m going to be presenting a free webinar produced by Redmond Magazine sponsored by Quest (who will also be a co-presenter) on June 22, 2017 (7AM PT). The title of the webinar is GDPR Compliance Planning for Microsoft Environments.

This webinar looks at the General Data Protection Regulation (GDPR) and its impact on Microsoft environments. Specifically, we look at how GDPR along with other future compliance directives impact Microsoft cloud, on-premises, and hybrid environments, as well as what you can do to be ready before the May 25, 2018 deadline. Join us for this discussion of what you need to know to plan and carry out a strategy to help address GDPR compliance regulations for Microsoft environments.

What you will learn during this discussion:

  • Why GDPR and other regulations impact your environment
  • How to assess and find compliance risks
  • How to discover who has access to sensitive resources
  • Importance of real-time auditing to monitor and alert on user access activity

This webinar applies to business professionals responsible for strategy, planning and policy decision-making for Microsoft environments along with associated applications. This includes security, compliance, data protection, system admins, architects and other IT professionals.

What This All Means

Now is the time to start planning and preparing for GDPR if you have not done so and need to, as well as becoming more generally aware of it and other initiatives. One of the key takeaways is that while the word compliance is involved, there is much more to GDPR than just compliance as we have seen in the past. With GDPR and other initiatives, data protection becomes the focus, including privacy, protect, preserve, secure and serve, as well as manage, insight and awareness along with associated reporting. Join me and Quest on June 22, 2017 at 7AM PT for the webinar GDPR Compliance Planning for Microsoft Environments to learn more.

Ok, nuff said, for now.

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Azure Stack Technical Preview 3 (TP3) Overview Preview Review

server storage I/O trends

Azure Stack Technical Preview 3 (TP3) Overview Preview Review

Perhaps you are aware or use Microsoft Azure, how about Azure Stack?

This is part one of a two-part series looking at Microsoft Azure Stack providing an overview, preview and review. Read part two here that looks at my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3).

For those who are not aware, Azure Stack is a private on-premises extension of the Azure public cloud environment. Azure Stack is now in technical preview three (e.g. TP3), or what you might also refer to as a beta (get the bits here).

In addition to being available via download as a preview, Microsoft is also working with vendors such as Cisco, Dell EMC, HPE, Lenovo and others who have announced Azure Stack support. Vendors such as Dell EMC have also made proof of concept kits available that you can buy, including server with storage and software. Microsoft has also indicated that once production versions launch, scaling from a few to many nodes, a single node proof of concept or development system will remain available.

software defined data infrastructure SDDI and SDDC
Software-Defined Data Infrastructures (SDDI) aka Software-defined Data Centers, Cloud, Virtual and Legacy

Besides being an on-premises, private cloud variant, Azure Stack is also hybrid capable, being able to work with the Azure public cloud. In addition to working with public cloud Azure, Azure Stack services and in particular workloads can also work with traditional Microsoft, Linux and other environments. You can use prebuilt solutions from the Azure marketplace, in addition to developing your applications using Azure services and DevOps tools. Azure Stack enables hybrid deployment into public or private cloud to balance flexibility, control and your needs.

Azure Stack Overview

Microsoft Azure Stack is an on-premises (e.g. in your own data center), private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

In summary, Microsoft Azure Stack is:

  • An onsite, on-premises, in your data center extension of the Microsoft Azure public cloud
  • Enables private and hybrid cloud with strong integration along with common experiences with Azure
  • Adopt, deploy and leverage cloud on your terms and timeline, choosing what works best for you
  • Common processes, tools, interfaces, management and user experiences
  • Leverages speed of deployment and configuration with a purpose-built integrated solution
  • Supports existing and cloud native Windows, Linux, container and other services
  • Available as a public preview via software download, as well as from vendors offering solutions

What is Azure Stack Technical Preview 3 (TP3)

This version of Azure Stack is a single node running on a lone physical machine (PM) aka bare metal (BM). However, it can also be installed into a virtual machine (VM) using nesting. For example, I have Azure Stack TP3 running nested on a VMware vSphere ESXi 6.5 system with a Windows Server 2016 VM as its base operating system.

Microsoft Azure Stack architecture
Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

The TP3 POC Azure Stack is not intended for production environments, only for testing, evaluation, learning and demonstrations as part of its terms of use. This single node version of Azure Stack uses an identity provider, either Azure Active Directory (AAD) integrated with Azure, or Active Directory Federation Services (ADFS) for standalone mode. Note that since this is a single server deployment, it is not intended for performance; rather, it is for evaluating functionality, features, APIs and other activities. Learn more about Azure Stack TP3 details here (or click on image), including the names of the various virtual machines (VMs) as well as their roles.

Where to learn more

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to Azure stack TP3 installs.
  • Configure Azure stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure stack and other tools (Github)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge
  • What this all means

A common question is whether there is demand for private and hybrid cloud. Some industry pundits have even said private or hybrid clouds are dead, which is interesting; how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also primary candidates, as are containers and other services.

Continue reading more in part two of this two-part series here, including installing Microsoft Azure Stack TP3.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Azure Stack TP3 Overview Preview Review Part II

server storage I/O trends

Azure Stack TP3 Overview Preview (Part II) Install Review

This is part two of a two-part series looking at Microsoft Azure Stack, with a focus on my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3), including into a nested VMware vSphere ESXi environment. Read part one here, which provides a general overview of Azure Stack.

Azure Stack Review and Install

Being familiar with the Microsoft Azure public cloud, having used it for a few years now, I wanted to gain some closer insight and experience, and expand my tradecraft on Azure Stack by installing TP3. This is similar to what I have done in the past with OpenStack, Hadoop, Ceph, VMware, Hyper-V and many others, some of which I need to get around to writing about sometime. As a refresher from part one of this series, the following is an image via Microsoft showing the Azure Stack TP3 architecture; click here or on the image to learn more, including the names and functions of the various virtual machines (VMs) that make up Azure Stack.

Microsoft Azure Stack architecture
Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

What's Involved Installing Azure Stack TP3?

The basic steps are as follows:

  • Read this Azure Stack blog post (Azure Stack)
  • Download the bits (e.g. the Azure Stack software) from here, where you access the Azure Stack Downloader tool.
  • Plan your deployment, making decisions on Active Directory and other items.
  • Prepare the target server (physical machine aka PM, or virtual machine VM) that will be the Azure Stack destination.
  • Copy the Azure Stack software and installer to the target server and run the pre-install scripts.
  • Modify the PowerShell script file if using a VM instead of a PM.
  • Run the Azure Stack CloudBuilder setup; configure unattend.xml if needed or answer prompts.
  • The server reboots; select Azure Stack from the two boot options.
  • Prepare your Azure Stack base system (time, network NICs in static or DHCP mode; if running on VMware, install VMtools).
  • Determine if you will be running with Azure Active Directory (AAD) or standalone Active Directory Federation Services (ADFS).
  • Update any applicable installation scripts (see notes that follow).
  • Run the deployment script, then extend the Azure Stack TP3 PoC as needed.

    Note that this is a large download of about 16GB (23GB with optional WIndows Server 2016 demo ISO).

    Use the AzureStackDownloader tool to download the bits (about 16GB or 23GB with optional Windows Server 2016 base image) which will either be in several separate files which you stitch back together with the MicrosoftAzureStackPOC tool, or as a large VHDX file and smaller 6.8GB ISO (Windows Server 2016). Prepare your target server system for installation once you have all the software pieces downloaded (or do the preparations while waiting for download).

    Once you have the software downloaded, if it is a series of eight .bin files (seven of about 2GB, one around 1.5GB), it is a good idea to verify their checksums, then stitch them together on your target system, or on a staging storage device or file share. Note that for the actual deployment first phase, the large resulting cloudbuilder.vhdx file will need to reside in the C:\ root location of the server where you are installing Azure Stack.
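    To verify the pieces before stitching, a quick PowerShell sketch along these lines can be used. Note the .bin file names and the expected hash values below are placeholders for illustration; substitute the names and checksums published with the actual download.

```powershell
# Sketch: verify downloaded Azure Stack .bin pieces before stitching them.
# File names and expected SHA256 values are placeholders; use those
# published alongside the download.
$expected = @{
    'AzureStackPOC-1.bin' = 'PUT-PUBLISHED-SHA256-HERE'
    'AzureStackPOC-2.bin' = 'PUT-PUBLISHED-SHA256-HERE'
}
foreach ($file in $expected.Keys) {
    $actual = (Get-FileHash -Path $file -Algorithm SHA256).Hash
    if ($actual -ne $expected[$file]) { Write-Warning "$file checksum mismatch" }
    else { Write-Host "$file OK" }
}
```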

    server storageio nested azure stack tp3 vmware

    Azure Stack deployment prerequisites (Microsoft) include:

    • At least 12 cores, dual socket processor if possible
    • As much DRAM as possible (I used 100GB)
    • Put the operating system disk on flash SSD (SAS, SATA, NVMe) if possible, allocate at least 200GB (more is better)
    • Four x 140GB or larger (I went with 250GB) drives (HDD or SSD) for data deployment drives
    • A single NIC or adapter (I put mine into static instead of DHCP mode)
    • Verify your physical or virtual server BIOS has VT enabled

    The above image helps to set the story of what is being done. On the left is a bare metal (BM) or physical machine (PM) install of Azure Stack TP3; on the right, a nested VMware (vSphere ESXi 6.5) approach using a virtual machine (VM) at hardware version 11. Note that you could also do a Hyper-V nested deployment among other approaches. Common to both a BM or VM install, as shown in the image above, is a staging area (which could be space on your system drive) where the Azure Stack download occurs. If you use a separate staging area, then simply copy the individual .bin files and stitch them together into the larger .VHDX, or copy the larger .VHDX, whichever you prefer.

    Note that if you use the nested approach, there are a couple of configuration (PowerShell) scripts that need to be updated. These changes are to trick the installer into thinking that it is on a PM when it checks to see if on physical or virtual environments.

    Also note that if using nested, make sure you have your VMware vSphere ESXi host and the specific VM properly configured (e.g. that virtualization and other features are presented to the VM). With vSphere ESXi 6.5 and virtual machine hardware version 11, nesting is night and day easier vs. earlier generations.

    Something else to explain here is that you will initially start the Azure Stack install preparation using a standard Windows Server (I used a 2016 version) where the .VHDX is copied into its C:\ root. From there you will execute some PowerShell scripts to set up some configuration files, one of which needs to be modified for nesting.

    Once those prep steps are done, there is a Cloudbuilder deploy script that gets run that can be done with an unattend.xml file or manual input. This step will cause a dual-boot option to be added to your server where you can select Azure Stack or your base prep Windows Server instance, followed by reboot.

    After the reboot occurs and you choose to boot into Azure Stack, this is the server instance that will actually run the deployment script, as well as build and launch all the VMs for the Azure Stack TP3 PoC. This is where I recommend having a rough sketch like the one above, annotating layers as you go so you remember what layer you are working at. Don't worry, it becomes much easier once all is said and done.

    Speaking of preparing your server, refer to Microsoft specs, however in general give the server as much RAM and cores as possible. Also if possible place the system disk on a flash SSD (SAS, SATA, NVMe) and make sure that it has at least 200GB, however 250 or even 300GB is better (just in case you need more space).

    Additional configuration tips include allocating four data disks for Azure; if possible make these SSDs as well, however it is more important IMHO to have at least the system disk on fast flash SSD. Another tip is to enable only one network card or NIC and put it into static vs. DHCP address mode to make things easier later.
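    If you prefer PowerShell over the GUI for the static NIC setup, something like the following works. The interface alias, addresses and DNS server here are assumptions for illustration; use values valid for your own network.

```powershell
# Sketch: put the single NIC into a static IPv4 configuration.
# 'Ethernet0' and the addresses below are example values only.
New-NetIPAddress -InterfaceAlias 'Ethernet0' -IPAddress 192.168.1.50 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet0' -ServerAddresses 192.168.1.1
```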

    Tip: If running nested, vSphere 6.5 worked the smoothest, as I had various issues or inconsistencies with earlier VMware versions, even with VMs that otherwise ran nested just fine.

    Tip: Why run nested? Simple, I wanted to be able to use VMware tools, do snapshots to go back in time, plus share the server with some other activities until ready to give Azure Stack TP3 its own PM.

    Tip: Do not connect the POC machine to the following subnets (192.168.200.0/24, 192.168.100.0/27, 192.168.101.0/26, 192.168.102.0/24, 192.168.103.0/25, 192.168.104.0/25) as Azure Stack TP3 uses those.
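    As a quick sanity check for the above, a rough sketch like this can flag a local address that lands in one of those ranges. Note it only compares third-octet prefixes rather than the exact CIDR masks listed, so treat it as a guide.

```powershell
# Sketch: warn if any local IPv4 address falls in a 192.168.x range
# reserved by Azure Stack TP3 (prefix comparison only, not exact CIDR).
$reserved = '192.168.200.', '192.168.100.', '192.168.101.',
            '192.168.102.', '192.168.103.', '192.168.104.'
Get-NetIPAddress -AddressFamily IPv4 | ForEach-Object {
    foreach ($prefix in $reserved) {
        if ($_.IPAddress.StartsWith($prefix)) {
            Write-Warning "$($_.IPAddress) overlaps a subnet used by Azure Stack TP3"
        }
    }
}
```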

    storageio azure stack tp3 vmware configuration

    Since I decided to deploy using a nested VM on VMware, there were a few extra steps needed, which I have included as tips and notes. Following is a view via the vSphere client of the ESXi host and VM configuration.

    The following image combines a couple of different things including:

    A: Showing the contents of C:\Azurestack_Supportfiles directory

    B: Modifying the PrepareBootFromVHD.ps1 file if deploying on virtual machine (See tips and notes)

    C: Showing contents of staging area including individual .bin files along with large CloudBuilder.vhdx

    D: Running the PowerShell script commands to prepare the PrepareBootFromVHD.ps1 and related items

    preparing azure stack tp3 cloudbuilder for nested vmware deployment

    From PowerShell (administrator):

    # Variables
    $Uri = 'https://raw.githubusercontent.com/Azure/AzureStack-Tools/master/Deployment/'
    $LocalPath = 'c:\AzureStack_SupportFiles'

    # Create folder
    New-Item $LocalPath -ItemType Directory

    # Download files
    ('BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') | ForEach-Object { Invoke-WebRequest ($Uri + $_) -OutFile ($LocalPath + '\' + $_) }

    After you do the above, decide if you will be using an Unattend.xml or manual entry of items for building the Azure Stack deployment server (e.g. a Windows Server). Note that the above PowerShell script creates the C:\AzureStack_SupportFiles folder and downloads the script files for building the cloud image using the previously downloaded Azure Stack CloudBuilder.vhdx (which should be in C:\).

    A note and tip: if you are doing a VMware or virtual machine based deployment of the TP3 PoC, you will need to change the PrepareBootFromVHD.ps1 file in the Azure Stack support files folder. Here is a good resource via GitHub on what gets changed, showing an edit on or about line 87 of PrepareBootFromVHD.ps1. If you run the PrepareBootFromVHD.ps1 script on a virtual machine you will get an error message; the fix is relatively easy (after I found this post).

    Look in PrepareBootFromVHD.ps1 for something like the following around line 87:

    if ((Get-Disk | Where-Object { $_.IsBoot -eq $true }).Model -match 'Virtual Disk') {
        Write-Host "The server is currently already booted from a virtual hard disk, to boot the server from the CloudBuilder.vhdx you will need to run this script on an Operating System that is installed on the physical disk of this server."
        Exit
    }

    You can either remove the "exit" command, or, change the test for "Virtual Disk" to something like "X", for fun I did both (and it worked).
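    For example, with the match string changed, the tweaked check ends up looking something like the following (a sketch of the edit described above, not the verbatim file contents):

```powershell
# Modified check around line 87 of PrepareBootFromVHD.ps1 so it no longer
# trips when the boot disk is virtual: 'Virtual Disk' changed to 'X'
# and the Exit removed (per the workaround above).
if ((Get-Disk | Where-Object { $_.IsBoot -eq $true }).Model -match 'X') {
    Write-Host "The server is currently already booted from a virtual hard disk..."
    # Exit  # removed so the script continues on a virtual machine
}
```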

    Note that you only have to make the above and another change in a later step if you are deploying Azure Stack TP3 as a virtual machine.

    Once you are ready, go ahead and launch the PrepareBootFromVHD.ps1 script which will set the BCDBoot entry (more info here).

    azure stack tp3 cloudbuilder nested vmware deployment

    You will see a reboot and install; this is installing what will be called the physical instance. Note that this is really being installed on the VM system drive as a secondary boot option (e.g. Azure Stack).

    azure stack tp3 dual boot option

    After the reboot, login to the new Azure Stack base system and complete any configuration, including adding VMware Tools if using VMware nested. Some other things to do include making sure your single network adapter is set to static (makes things easier), along with any other updates or customizations. Before you run the next steps, you need to decide if you are going to use Azure Active Directory (AAD) or local ADFS.

    Note that if you are not running on a virtual machine, simply open a PowerShell (administrator) session, and run the deploy script. Refer to here for more guidance on the various options available including discussion on using AAD or ADFS.

    Note that if you run the deployment script on a virtual machine, you will get an error which is addressed in the next section; otherwise, sit back and watch the progress.

    CloudBuilder Deployment Time

    Once you have your Azure Stack deployment system and environment ready, including a snapshot if on a virtual machine, launch the PowerShell deployment script. Note that you will need to have decided if deploying with Azure Active Directory (AAD) or Active Directory Federation Services (ADFS) for standalone aka submarine mode. There are also other options you can select as part of the deployment discussed in the Azure Stack tips here (a must read) and here. I chose to do a submarine mode (e.g. not connected to Public Azure and AAD) deployment.

    From PowerShell (administrator):

    cd C:\CloudDeployment\Setup
    $adminpass = ConvertTo-SecureString "youradminpass" -AsPlainText -Force
    .\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -UseADFS

    Deploying on VMware Virtual Machines Tips

    Here is a good tip via Gareth Jones (@garethjones294) that I found useful for updating one of the deployment script files (BareMetal_Tests.ps1 located in C:\CloudDeployment\Roles\PhysicalMachines\Tests folder) so that it would skip the bare metal (PM) vs. VM tests. Another good resource, even though it is for TP2 and early versions of VMware is TP2 deployment experiences by Niklas Akerlund (@vNiklas).

    Note that this is a bit of a chicken and egg scenario unless you are proficient at digging into script files, since the BareMetal_Tests.ps1 file does not get unpacked until you run the CloudBuilder deployment script. If you run the script and get an error, then make the changes below and rerun the script as noted. Once you make the modification to the BareMetal_Tests.ps1 file, keep a copy in a safe place for future use.

    Here are some more tips for deploying Azure Stack on VMware.

    Per the tip mentioned above via Gareth Jones (tip: read Gareth's post rather than simply cutting and pasting the following, which is more of a guide):

    • Open the BareMetal_Tests.ps1 file in PowerShell ISE and navigate to line 376 (or in that area).
    • Change $false to $true, which will stop the script failing when it checks whether Azure Stack is running inside a VM.
    • Next go to line 453.
    • Change the last part of the line to read "Should Not BeLessThan 0".
    • This will stop the script checking for the required number of cores available.
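    As a rough illustration of those two edits (the actual variable names in BareMetal_Tests.ps1 differ; treat this as a guide per Gareth's post, not a paste-in):

```powershell
# Around line 376: force the "running inside a VM" check to pass
# (change the hard-coded $false to $true).
$allowVirtualMachine = $true   # hypothetical name; the original value was $false

# Around line 453: relax the core count assertion so it cannot fail
# (the line's ending is changed to read "Should Not BeLessThan 0").
$coreCount | Should Not BeLessThan 0   # hypothetical variable name
```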

    After you make the above correction, as with any error (and fix) during Azure Stack TP3 PoC deployment, simply run the following.

    cd C:\CloudDeployment\Setup
    .\InstallAzureStackPOC.ps1 -rerun
    

    Refer to the extra links in the where to learn more section below that offer various tips, tricks and insight that I found useful, particularly for deploying on VMware aka nested. Also in the links below are tips on general Azure Stack, TP2, TP3, adding services among other insight.

    starting azure stack tp3 deployment

    Tip: If you are deploying Azure Stack TP3 PoC on a virtual machine, once you start the script above, copy the modified BareMetal_Tests.ps1 file back into place (e.g. into the C:\CloudDeployment\Roles\PhysicalMachines\Tests folder mentioned above) once it has been unpacked.

    Once the CloudBuilder deployment starts, sit back and wait; if you are using SSDs it will take a while, if using HDDs it will take a long while (up to hours). However, check in on it now and then to see the progress or whether any errors have occurred. Note that some of the common errors will occur very early in the deployment, such as the BareMetal_Tests.ps1 issue mentioned above.

    azure stack tp3 deployment finished

    Check in periodically to see how the deployment is progressing, as well as what is occurring. If you have the time, watch some of the scripts, as you can see some interesting things such as the software-defined data center (SDDC) aka software-defined data infrastructure (SDDI) aka Azure Stack virtual environment being created. This includes virtual machine creation and population, creating the software-defined storage using Storage Spaces Direct (S2D), virtual networking and Active Directory along with domain controllers among other activity.

    azure stack tp3 deployment progress

    After Azure Stack Deployment Completes

    After you see the deployment completed, you can try accessing the management portal, however there may be some background processing still running. Here is a good tip post from Microsoft on connecting to Azure Stack using Remote Desktop (RDP) access. Use RDP from the Azure Stack deployment Windows Server to connect to the virtual machine named MAS-CON01, launch Server Manager, and for Local Server disable Internet Explorer Enhanced Security (make sure you are on the right system, see the tip mentioned above). Disconnect from MAS-CON01 (refer to the Azure Stack architecture image above), then reconnect and launch Internet Explorer with the portal URL (note: the URL the documentation said to use did not work for me).

    Note the username for the Azure Stack system is AzureStack\AzureStackAdmin with the password you set for the administrator during setup. If you get an error, verify the URLs, check your network connectivity, wait a few minutes, and verify what server you are trying to connect from and to. Keep in mind that even if deploying on a PM or BM (e.g. a non-virtual server), the Azure Stack TP3 PoC deployment creates a "virtual" software-defined environment with servers, storage (Azure Stack uses Storage Spaces Direct [S2D]) and software-defined networking.

    accessing azure stack tp3 management portal dashboard

    Once able to connect to Azure Stack, you can add new services including virtual machine image instances such as Windows (use the Server 2016 ISO that is part of Azure Stack downloads), Linux or others. You can also go to these Microsoft resources for some first learning scenarios, using the management portals, configuring PowerShell and troubleshooting.

    Where to learn more

    The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to AzureStack TP3 installs.
  • Configure Azure Stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure stack and other tools (Github)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge

    What this all means

    A common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

    Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure Public, those are also prime candidates, as are containers and other services.

    software defined data infrastructures SDDI and SDDC

    Some will say that if OpenStack, which is free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack due to a lack of commercial services and turnkey support, while others have not. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy-to-use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

    OpenStack, Azure Stack, VMware and others have their places, alongside, or supporting, containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Dell EMC Announce Azure Stack Hybrid Cloud Solution

    server storage I/O trends

    Dell EMC Azure Stack Hybrid Cloud Solution

    Dell EMC have announced their Microsoft Azure Stack hybrid cloud platform solutions. This announcement builds upon earlier statements of support and intention by Dell EMC to be part of the Microsoft Azure Stack community. For those of you who are not familiar, Azure Stack is an on-premises extension of the Microsoft Azure public cloud.

    What this means is that essentially you can have the Microsoft Azure experience (or a subset of it) in your own data center or data infrastructure, enabling cloud experiences and abilities at your own pace, your own way, with control. Learn more about Microsoft Azure Stack, including my experiences with installing Technical Preview 3 (TP3), here.

    software defined data infrastructures SDDI and SDDC

    What Is Azure Stack

    Microsoft Azure Stack is an on-premises (e.g. in your own data center) private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features among other activities. Here is link to a Microsoft Video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

    In summary, Microsoft Azure Stack and this announcement is about:

    • An onsite, on-premises, in your data center extension of Microsoft Azure public cloud
    • Enabling private and hybrid cloud with good integration along with shared experiences with Azure
    • Adopt, deploy, leverage cloud on your terms and timeline choosing what works best for you
    • Common processes, tools, interfaces, management and user experiences
    • Leverage speed of deployment and configuration with a purpose-built integrated solution
    • Support existing and cloud-native Windows, Linux, Container and other services
    • Available as a public preview via software download, as well as vendors offering solutions

    What Did Dell EMC Announce

    Dell EMC announced their initial products, platform solutions, and services for Azure Stack. This includes a Proof of Concept (PoC) starter kit (PE R630) for doing evaluations, prototyping, training, development, test, DevOps and other initial activities with Azure Stack. Dell EMC also announced a larger turnkey solution for production deployment, or large-scale development, test and DevOps activity. The initial production solution scales from 4 to 12 nodes, or from 80 to 336 cores, and includes hardware (server compute, memory, I/O and networking), top of rack (TOR) switches, management, and Azure Stack software along with services. Other aspects of the announcement include initial services in support of Microsoft Azure Stack and Azure cloud offerings.
    server storage I/O trends
    Image via Dell EMC

    The announcement builds on joint Dell EMC Microsoft experience, partnerships, technologies and services spanning hardware, software, on site data center and public cloud.
    server storage I/O trends
    Image via Dell EMC

    Dell EMC along with Microsoft have engineered a hybrid cloud platform for organizations to modernize their data infrastructures, enabling faster innovation and accelerated deployment of resources. The solution includes hardware (server compute, memory, I/O networking, storage devices), software, services, and support.
    server storage I/O trends
    Image via Dell EMC

    The value proposition of Dell EMC hybrid cloud for Microsoft Azure Stack includes consistent experience for developers and IT data infrastructure professionals. Common experience across Azure public cloud and Azure Stack on-premises in your data center for private or hybrid. This includes common portal, Powershell, DevOps tools, Azure Resource Manager (ARM), Azure Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), Cloud Infrastructure and associated experiences (management, provisioning, services).
    server storage I/O trends
    Image via Dell EMC

    Secure, protect, preserve and serve applications and VMs hosted on Azure Stack with Dell EMC services along with Microsoft technologies. Dell EMC data protection includes backup and restore, Encryption as a Service, host guard and protected VMs, and AD integration among other features.
    server storage I/O trends
    Image via Dell EMC

    Dell EMC services for Microsoft Azure Stack include single contact support for prepare, assessment, planning; deploy with rack integration, delivery, configuration; extend the platform with applicable migration, integration with Office 365 and other applications, build new services.
    server storage I/O trends
    Image via Dell EMC

    Dell EMC hyper-converged scale-out solutions start from a minimum of 4 x PowerEdge R730XD (total raw specs include 80 cores (4 x 20), 1TB RAM (4 x 256GB), 12.8TB SSD cache and 192TB storage), plus two top of rack (TOR) network switches (Dell EMC) and a 1U management server node. The initial maximum configuration raw specification includes 12 x R730XD (total 336 cores), 6TB memory, 86TB SSD cache and 900TB storage, along with TOR network switches and a management server.

    The above configurations initially enable HCI nodes as follows:

    • Small (low): 20 cores, 256GB memory, 5.7TB SSD cache, 40TB storage per node
    • Mid-size: 24 cores, 384GB memory, 11.5TB cache, 60TB storage per node
    • High-capacity: 28 cores, 512GB memory, 11.5TB cache, 80TB storage per node
    server storage I/O trends
    Image via Dell EMC

    The Dell EMC Evaluator program for Microsoft Azure Stack includes the PE R630 for PoCs, development, test and training environments. The solution combines Microsoft Azure Stack software with a Dell EMC server using an Intel E5-2630 (10 cores, 20 threads / logical processors or LPs) or Intel E5-2650 (12 cores, 24 threads / LPs). Memory is 128GB or 256GB; storage includes flash SSD (2 x 480GB SAS), HDD (6 x 1TB SAS) and networking.
    server storage I/O trends
    Image via Dell EMC

    Collaborative support provides a single point of contact spanning Microsoft and Dell EMC.

    Who Is This For

    This announcement is for any organization that is looking for an on-premises, in your data center private or hybrid cloud turnkey solution stack. This initial set of announcements can be for those looking to do a proof of concept (PoC), advanced prototype, support development test, DevOp or gain cloud-like elasticity, ease of use, rapid procurement and other experiences of public cloud, on your terms and timeline. Naturally, there is a strong affinity and seamless experience for those already using, or planning to use Azure Public Cloud for Windows, Linux, Containers and other workloads, applications, and services.

    What Does This Cost

    Check with your Dell EMC representative or partner for exact pricing, which varies by size and configuration. There are also various licensing models to take into consideration if you have Microsoft Enterprise License Agreements (ELAs), which your Dell EMC representative or business partner can address for you. Likewise, being cloud based, there are also time and usage-based options to explore.

    Where to learn more

    What this all means

    The dust is starting to settle on last fall's Dell EMC integration; both have long histories working with, and partnering with, Microsoft on legacy as well as virtual software-defined data centers (SDDC), software-defined data infrastructures (SDDI), native, and hybrid clouds. Some may view the Dell EMC VMware relationship as a primary focus; however, keep in mind that both Dell and EMC had worked with Microsoft long before VMware came into being. Likewise, Microsoft remains one of the most commonly deployed operating systems in VMware-based environments. Granted, Dell EMC have a significant focus on VMware; they also sell, service and support many services for Microsoft-based solutions.

    What about Cisco, HPE and Lenovo among others who have yet to announce or discuss their Microsoft Azure Stack intentions? Good question: until we hear more about what those and others are doing or planning, there is not much to do or discuss beyond speculating for now. Another common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

    Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure Public, those are also prime candidates, as are containers and other services.

    Overall, this is an excellent and exciting move by Microsoft, extending their public cloud software stack to be deployed within data centers in a hybrid way, something those customers are familiar with doing. This is a good example of hybrid spanning public and private clouds, remote and on-premises, as well as combining the familiarity and control of traditional procurement with the flexibility and elasticity experience of clouds.

    software defined data infrastructures SDDI and SDDC

    Some will say that if OpenStack, which is free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack due to a lack of commercial services and turnkey support, while others have not. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy-to-use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

    OpenStack, Azure Stack, VMware and others have their places, alongside, or supporting, containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

    Overall this is a good announcement from Dell EMC for those who are interested in, or should become more aware about Microsoft Azure Stack, Cloud along with hybrid clouds. Likewise look forward to hearing more about the solutions from others who will be supporting Azure Stack as well as other hybrid (and Virtual Private Clouds).

    Ok, nuff said (for now…).

    Cheers
    Gs


    March 2017 Server StorageIO Data Infrastructure Update Newsletter

    Volume 17, Issue III

    Hello and welcome to the March 2017 issue of the Server StorageIO update newsletter.

    First a reminder world backup (and recovery) day is on March 31. Following up from the February Server StorageIO update newsletter that had a focus on data protection this edition includes some additional posts, articles, tips and commentary below.

    Other data infrastructure (and tradecraft) topics in this edition include cloud, virtual, server, storage and I/O including NVMe as well as networks. Industry trends include new technology and services announcements, cloud services, HPE buying Nimble among other activity. Check out the Converged Infrastructure (CI), Hyper-Converged (HCI) and Cluster in Box (or Cloud in Box) coverage including a recent SNIA webinar I was invited to be the guest presenter for, along with companion post below.

    In This Issue

    Enjoy this edition of the Server StorageIO update newsletter.

    Cheers GS

    Data Infrastructure and IT Industry Activity Trends

    Some recent Industry Activities, Trends, News and Announcements include:

    Dell EMC has discontinued the DSSD D5, its NVMe direct attached shared all flash array. At about the same time it is shutting down the DSSD D5 product, Dell EMC has also signaled it will leverage various technologies including NVMe across its broad server storage portfolio in different ways moving forward. While shutting down the DSSD D5, Dell EMC is also bringing additional NVMe solutions to market, including those it has been shipping for years (e.g. on the server side). Learn more about DSSD D5 here and here, including perspectives on how it could have been used (plays for playbooks).

    Meanwhile NVMe industry activity continues to expand with different solutions from startups such as E8, Excelero, Everspin, Intel, Mellanox, Micron, Samsung and WD SanDisk among others. Also keep in mind that if the answer is NVMe, then what were (and are) the questions to ask, as well as what are some easy-to-use benchmark scripts (using fio, diskspd, vdbench or iometer).
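    For a rough sense of what such benchmark scripts measure, here is a minimal Python sketch. It is not a substitute for fio, diskspd, vdbench or iometer (which handle queue depths, random patterns, direct I/O and much more properly); it only illustrates the basic idea of timing a fixed amount of I/O to derive throughput.

```python
# Minimal storage micro-benchmark sketch: write then read a scratch file and
# report MB/s for each phase. Illustrative only; use fio/diskspd for real work.
import os
import tempfile
import time

def quick_io_test(size_mb=16, block_kb=4):
    """Write then read size_mb of data in block_kb blocks; return (write, read) MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        t0 = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure data actually reaches the device
        write_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_s = time.perf_counter() - t0
    os.unlink(path)
    return size_mb / write_s, size_mb / read_s

if __name__ == "__main__":
    w, r = quick_io_test()
    print(f"write: {w:.1f} MB/s, read: {r:.1f} MB/s")
```

    Note that reads may be served from the operating system page cache, which is exactly the kind of effect the real tools control for (e.g. direct I/O flags).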

    Speaking of NVMe, flash and SSDs, Amazon Web Services (AWS) has added new Elastic Compute Cloud (EC2) storage and I/O optimized i3 instances. These new instances are available in various configurations with different amounts of vCPU (cores or logical processors), memory and NVMe SSD capacity (and quantity), along with price.

    Note that the price per i3 instance varies not only by configuration, but also by the image and the region it is deployed in. The flash SSD capacities range from the entry-level i3.large with 2 vCPU (logical processors), 15.25GB of RAM and a single 475GB NVMe SSD, which for example in the US East region was recently priced at $0.156 per hour. At the high-end there is the i3.16xlarge with 64 vCPU (logical processors), 488GB RAM and 8 x 1900GB NVMe SSDs, with a recent US East region price of $4.992 per hour. Note that vCPU refers to the number of logical processors available, not necessarily cores or sockets.
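    To put those example prices in perspective, here is a small hedged calculation using the US East example rates quoted above (prices vary by region and change over time, so verify against current AWS pricing before relying on these numbers):

```python
# Back-of-envelope cost comparison for the two example i3 instances cited above.
HOURS_PER_MONTH = 730  # common monthly approximation (24 * 365 / 12)

i3_examples = {
    # name: (vCPU, RAM GB, total NVMe SSD GB, USD per hour - example US East rates)
    "i3.large":    (2,  15.25, 475,      0.156),
    "i3.16xlarge": (64, 488,   8 * 1900, 4.992),
}

def monthly_cost(usd_per_hour):
    return usd_per_hour * HOURS_PER_MONTH

def usd_per_gb_month(name):
    _vcpu, _ram_gb, ssd_gb, per_hour = i3_examples[name]
    return monthly_cost(per_hour) / ssd_gb

for name, (_v, _r, ssd_gb, per_hour) in i3_examples.items():
    print(f"{name}: ${monthly_cost(per_hour):,.2f}/month, "
          f"${usd_per_gb_month(name):.3f} per SSD GB-month")
```

    Interestingly, at these example rates both ends of the range work out to roughly the same cost per SSD gigabyte per month; you are mostly paying for more of everything, not a different rate.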

    Also note that your performance will vary, and while the NVMe protocol tends to use less CPU per I/O, generating a large number of I/Os still requires some CPU. What this means is that if you find your performance limited compared to expectations with the lower-end i3 instances, move up to a larger instance and see what happens. If you have a Windows-based environment, you can use a tool such as Diskspd to see what happens with I/O performance as you vary the number of CPUs used.
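    As a rough, hedged illustration of that idea (Diskspd is the right tool on Windows, where its -t parameter controls threads per target), the Python sketch below varies worker count and reports aggregate write throughput, showing how concurrency interacts with available CPU:

```python
# Illustrative only: vary the number of concurrent writers and observe aggregate
# throughput, in the spirit of scaling Diskspd thread counts. Not a real benchmark.
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

def _write_file(path, blocks, block_size):
    # Each worker streams its own file, then fsyncs so data actually hits disk.
    buf = os.urandom(block_size)
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())

def aggregate_write_mb_s(threads, mb_per_thread=8, block_kb=64):
    """Total MB written across all workers divided by wall-clock time."""
    tmp = tempfile.mkdtemp()
    blocks = (mb_per_thread * 1024) // block_kb
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as ex:
        futures = [
            ex.submit(_write_file, os.path.join(tmp, f"w{i}.dat"),
                      blocks, block_kb * 1024)
            for i in range(threads)
        ]
        for fut in futures:
            fut.result()  # surface any worker exception
    elapsed = time.perf_counter() - start
    for name in os.listdir(tmp):
        os.unlink(os.path.join(tmp, name))
    os.rmdir(tmp)
    return (threads * mb_per_thread) / elapsed

if __name__ == "__main__":
    for t in (1, 2, 4):
        print(f"{t} worker(s): {aggregate_write_mb_s(t):.1f} MB/s aggregate")
```

    If throughput stops scaling as workers increase, you are hitting a device, filesystem or CPU limit, which is the same diagnosis exercise described above for the i3 instances.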

    Chelsio has announced they are now Microsoft Azure Stack Certified with their iWARP RDMA host adapter solutions, as well as for converged infrastructure (CI), hyper-converged (HCI) and legacy server storage deployments. As part of the announcement, Chelsio is also offering a 30-day, no-cost trial of their adapters for Microsoft Azure Stack, Windows Server 2016 and Windows 10 client environments. Learn more about the Chelsio trial offer here.

    Everspin (the MRAM spin-torque, persistent RAM folks) have announced a new Storage Class Memory (SCM) NVMe-accessible family (nvNITRO) of storage accelerator devices (PCIe AiC, U.2). What's interesting about Everspin is that they are using NVMe for accessing their persistent RAM (e.g. MRAM), making it easily plug-compatible with existing operating systems and hypervisors. This means using standard out-of-the-box NVMe drivers, where the Everspin SCM appears as a block device (for compatibility) functioning as a low-latency, high-performance persistent write cache.

    Something else interesting, besides making the new memory compatible with existing servers' CPU complex via PCIe, is how Everspin is demonstrating that NVMe as a general access protocol is not exclusive to NAND flash-based SSDs. What this means is that instead of using non-persistent DRAM, or slower NAND flash (or 3D XPoint SCM), Everspin nvNITRO enables a high-endurance write cache with persistence to complement existing NAND flash as well as emerging 3D XPoint based storage. Keep an eye on Everspin as they are doing some interesting things for future discussions.

    Google Cloud Services has added additional regions (cloud locations) and other enhancements.

    HPE continued buying into server storage I/O data infrastructure technologies, announcing an all-cash (e.g. no stock) acquisition of Nimble Storage (NMBL). The cash acquisition for a little over $1B USD amounts to $12.50 USD per Nimble share, double what it had traded at. As a refresh, or overview, Nimble is an all-flash shared storage system leveraging NAND flash solid-state device (SSD) performance. Note that Nimble also partners with Cisco and Lenovo, whose platforms compete with HPE servers for converged systems. View additional perspectives here.

    Riverbed has announced the release of SteelFusion 5, which, while its name implies physical hardware, is available as tin-wrapped (e.g. hardware appliance) software. The solution is also available for deployment as a VMware virtual appliance for remote office branch office (ROBO) environments among others. Enhancements include converged functionality such as NAS support, along with network latency and bandwidth features among others.

    Check out other industry news, comments, trends perspectives here.

    Server StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past StorageIOblog posts here

    Server StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Via InfoStor: 8 Big Enterprise SSD Trends to Expect in 2017
    Watch for increased capacities at lower cost, along with greater awareness of the difference between high-capacity, low-cost, lower-performing SSDs and those offering improved durability and performance, plus cost and capacity enhancements for active SSDs (read and write optimized). You can also expect increased support for NVMe both as a back-end storage device with different form factors (e.g., M.2 gum sticks, U.2 8639 drives, PCIe cards) as well as front-end (e.g., storage systems that are NVMe-attached) including local direct-attached and fiber-attached. This means more awareness around NVMe in both front-end and back-end deployment options.

    Via SearchITOperations: Storage performance bottlenecks
    Sometimes it takes more than an aspirin to cure a headache. There may be a bottleneck somewhere else, in hardware, software, storage system architecture or something else.

    Via SearchDNS: Parsing through the software-defined storage hype
    Beyond scalability, SDS technology aims for freedom from the limits of proprietary hardware.

    Via InfoStor: Data Storage Industry Braces for AI and Machine Learning
    AI could also help tap hidden or unknown value in existing data that has little or no perceived value.

    Via SearchDataCenter: New options to evolve data backup recovery

    View more Server, Storage and I/O trends and perspectives comments here

    Various Tips, Tools, Technology and Tradecraft Topics

    Recent Data Infrastructure Tradecraft Articles, Tips, Tools, Tricks and related topics.

    Via ComputerWeekly: Time to restore from backup: Do you know where your data is?
    Via IDG/NetworkWorld: Ensure your data infrastructure remains available and resilient
    Via IDG/NetworkWorld: What's a data infrastructure?

    Check out Scott Lowe (@Scott_Lowe) of VMware fame who, while having a virtual networking focus, has a nice roundup of related data infrastructure topics including cloud and open source among others.

    Want to take a break from reading or listening to tech talk? Check out some of the fun videos, including aerial drone footage (and some technology topics), at www.storageio.tv.

    View more tips and articles here

    Events and Activities

    Recent and upcoming event activities.

    May 8-10, 2017 – Dell EMCworld – Las Vegas

    April 3-7, 2017 – Seminars – Dutch workshop seminar series – Nijkerk Netherlands

    March 15, 2017 – Webinar – SNIA/BrightTalk HyperConverged and Storage – 10AM PT

    January 26 2017 – Seminar – Presenting at Wipro SDx Summit London UK

    See more webinars and activities on the Server StorageIO Events page here.


    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book Software-Defined Data Infrastructure Essentials (CRC Press).


    HPE Continues Buying Into Server Storage I/O Data Infrastructures

    Storage I/O Data Infrastructures trends
    Updated 1/16/2018

    HPE expanded its storage I/O data infrastructure portfolio, buying into server storage I/O data infrastructure technologies by announcing an all-cash (e.g. no stock) acquisition of Nimble Storage (NMBL). The cash acquisition for a little over $1B USD amounts to $12.50 USD per Nimble share, double what it had traded at. As a refresh, or overview, Nimble is an all-flash shared storage system leveraging NAND flash solid-state device (SSD) performance. Note that Nimble also partners with Cisco and Lenovo, whose platforms compete with HPE servers for converged systems.

    Earlier this year (keep in mind it's only mid-March) HPE also announced the acquisition of server storage hyper-converged infrastructure (HCI) vendor SimpliVity (about $650M USD cash). In another investment this year, HPE joined other investors in scale-out and software-defined storage startup Hedvig's latest funding round (more on that later). These acquisitions are in addition to smaller ones such as last year's purchase of SGI, not to mention various divestitures.

    Data Infrastructures

    What Are Server Storage I/O Data Infrastructures Resources

    Data infrastructures exist to support business, cloud and information technology (IT) among other applications that transform data into information or services. The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective.

    Technologies that make up data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking along with people, processes, policies along with various tools spanning legacy, software-defined virtual, containers and cloud.

    HPE and Server Storage Acquisitions

    HPE and its predecessor HP (e.g. before the split that resulted in HPE) is no stranger to expanding its data infrastructure portfolio spanning servers, storage, I/O networking, hardware, software and services. Acquisitions range from Compaq (which had acquired DEC, bringing the StorageWorks brand and product lineup, e.g. recall EVA and its predecessors) to LeftHand, 3PAR, IBRIX, PolyServe, Autonomy, EDS and others that I'm guessing some at HPE (along with customers and partners) might not want to remember.

    In addition to in-house development and technology acquisitions, HPE also partners for its entry-level, high-volume MSA (Modular Storage Array) series with Dot Hill, which was acquired by Seagate a year or so ago. Beyond the MSA, HPE's other storage OEM arrangements include Hitachi Ltd. (e.g. parent of Hitachi Data Systems aka HDS), whose high-end enterprise-class storage system HPE resells as the XP7, among various other partner arrangements.

    Keep in mind that HPE has a large server business from low to high-end, spanning towers to dense blades to dual, quad and cluster-in-box (CiB) configurations with various processor architectures. Some of these servers are used as platforms not only for HPE, but also for other vendors' software-defined storage, as well as tin-wrapped software solutions, appliances and systems. HPE is also one of a handful of partners working with Microsoft to bring the software-defined private (and hybrid) Azure Stack cloud stack to market as an appliance.

    HPE acquisitions Dejavu or Something New?

    For some people there may be a sense of déjà vu given what HPE and its predecessors have previously acquired, developed, sold and supported in the market over years (and in some cases decades). What will be interesting to see is how the 3PAR (StoreServ) and LeftHand-based (StoreVirtual) as well as ConvergedSystem 250-HC product lines are realigned to make way for Nimble and SimpliVity.

    Likewise, what will HPE do with the MSA at the low-end: continue to leverage it for low-end and high-volume basic storage, similar to Dell with the NetApp/Engenio-powered MD series? Or will HPE try to move Nimble down market and displace the MSA? What about the mid-market: will Nimble be unleashed to replace StoreVirtual (e.g. LeftHand), or will they fence it in (e.g. restrict it to certain scenarios)?
    Will the Nimble solution be allowed to move up market into the low-end of where 3PAR has been positioned, perhaps even higher given its all-flash capabilities? Or will there be a 3PAR everywhere approach?

    Then there is SimpliVity, as the solution is effectively software running on an HPE server (or, with other partners, Cisco and Lenovo) along with a PCIe offload card (providing SimpliVity data services acceleration). Note that SimpliVity leveraging PCIe offload cards for some of its functionality is familiar ground for HPE given 3PAR's use of ASICs.

    SimpliVity has the potential to disrupt some low to mid-range, perhaps even larger, opportunities that are looking to move to a converged infrastructure (CI) or HCI deployment as part of their data infrastructure needs. One can speculate that SimpliVity, after repackaging, will be positioned alongside current HPE CI and HCI solutions.

    This will be interesting to watch, to see if the HPE server and storage groups can converge not only from a technology point of view, but also from a sales, marketing, service and support perspective. With the SimpliVity solution, HPE has an opportunity to change the industry thinking or perception that HCI is only for small environments defined by what some products can do.

    What I mean by this is that HPE, with its enterprise, SMB, SME and cloud managed service provider experience as well as its servers, can bring hyper scale-out (and up) converged to the market. In other words, it can start addressing the concern I hear from larger organizations that most CI or HCI solutions (or packaging) are just for smaller environments. HPE has the servers; it has the storage from MSAs to other modules and core data infrastructure building blocks, along with the robustness of the SimpliVity software to enable hyper scale-out CI.

    What about bulk, object, scale-out storage

    HPE has a robust tape business. Yes, I know tape is dead; however, tell that to the customers who keep buying products, providing revenue along with margin to HPE (and others). Likewise HPE has VTLs as well as other solutions for addressing bulk data (e.g. big data, backups, protection copies, archives, high-volume and large-quantity data, what goes on tape or object storage). For example, HPE has the StoreOnce solution.

    However, where is the HPE object storage story?

    On the other hand, does HPE build its own object storage software, or simply partner with others? HPE can continue to provide servers along with underlying storage for other vendors' bulk, cloud and object storage systems, and where needed, meet in the channel among other arrangements.

    This is where, similar to PolyServe and IBRIX among others in the past, investments come into play: HPE via its Pathfinder investment group has joined others in putting some money into Hedvig. HPE gets access to Hedvig's scale-out storage that can be used for bulk as well as other deployments including CI, HCI and CiB (e.g. something to sell HPE servers and storage with).

    HPE can continue to partner with other software providers and software-defined storage stacks. Keep in mind that Milan Shetti (CTO, Data Center Infrastructure Group HPE) is no stranger to these waters given his past at Ibrix among others.

    What About Hedvig

    Time to get back to Hedvig, a storage startup whose software can run on various server storage platforms, as well as in different topologies. Those topologies include CI or HCI, cloud, as well as scale-out, with various access methods including block, file and object. In addition to block, file and object access, Hedvig has interesting management tools and data services, along with support for VMware, Docker, and OpenStack among others.

    Recently Hedvig landed another $21.5M USD in funding, bringing their total to about $52M USD. HPE via its investment arm joins other investors (note HPE was part of the $21.5M round; that was not the amount they invested) including Vertex, Atlantic Bridge, Redpoint, EDBI and True Ventures.

    What does this mean for HPE and Hedvig among others? Tough to say; however, it is easy to imagine how Hedvig could be leveraged as a partner using HPE servers, as well as an addition to HPE's bulk, scale-out, cloud and object storage portfolio.

    Where to Learn More

    View more material on HPE, data infrastructure and related topics with the following links.

  • Cloud and Object storage are in your future, what are some questions?
  • PCIe Server Storage I/O Network Fundamentals
  • If NVMe is the answer, what are the questions?
  • Fixing the Microsoft Windows 10 1709 post upgrade restart loop
  • Data Infrastructure server storage I/O network Recommended Reading
  • Introducing Windows Subsystem for Linux WSL Overview
  • IT transformation Serverless Life Beyond DevOps with New York Times CTO Nick Rockwell Podcast
  • HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads
  • AWS Announces New S3 Cloud Storage Security Encryption Features
  • NVM Non Volatile Memory Express NVMe Place
  • Data Infrastructure Primer and Overview (Its Whats Inside The Data Center)
  • January 2017 Server StorageIO Update Newsletter
  • September and October 2016 Server StorageIO Update Newsletter
  • HP Buys one of the seven networking dwarfs and gets a bargain
  • Did HP respond to EMC and Cisco VCE with Microsoft Hyper-V bundle?
  • Give HP storage some love and short strokin
  • While HP and Dell make counter bids, exclusive interview with 3PAR CEO David Scott
  • Data Protection Fundamental Topics Tools Techniques Technologies Tips
  • Hewlett-Packard beats Dell, pays $2.35 billion for 3PAR
  • HP Moonshot 1500 software defined capable compute servers
  • What Does Converged (CI) and Hyper converged (HCI) Mean to Storage I/O?
  • What’s a data infrastructure?
  • Ensure your data infrastructure remains available and resilient
  • Object Storage Center, The SSD place and The NVMe place
  • Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means

    Generally speaking I think this is a good series of moves for HPE (and their customers) as long as they can execute in all dimensions.

    Let’s see how they execute, and by this I mean more than simply executing (or terminating) staff from recent or earlier acquisitions. How will HPE craft a go-to-market message that leverages the portfolio to compete and hold or take share from other vendors, vs. cannibalizing across its own lines (e.g. revenue prevention)? With that strategy and message, how will HPE assure existing customers they will be taken care of and given a definite upgrade and migration path, vs. giving them a reason to go elsewhere?

    Hopefully HPE unleashes the full potential of SimpliVity and Nimble along with 3PAR and XP7 where needed, plus MSA at the low-end (or as part of volume scale-out with servers for software-defined storage), not to mention its server portfolio. For now, this tells me that HPE is still interested in maintaining and expanding its data infrastructure business vs. simply retrenching and selling off assets. Thus it looks like HPE is interested in continuing to invest in data infrastructure technologies, including buying into server, storage I/O networking, hardware and software solutions, rather than simply clinging to what it already has, or previously bought.

    Everything is not the same in data centers and across data infrastructures, so why have a one-size-fits-all approach for an organization as large and diverse as HPE?

    Congratulations and best wishes to the folks at Hedvig, Nimble, Simplivity.

    Now, let's see how this all plays out.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.


    February 2017 Server StorageIO Update Newsletter

    Server and StorageIO Update Newsletter

    Volume 17, Issue II

    Hello and welcome to the February 2017 issue of the Server StorageIO update newsletter.

    With world backup (and recovery) day coming up on March 31, it makes sense to plan, review, assess, remediate, test and prepare in advance to avoid or prevent a disaster later. Some of the themes in this month's newsletter thus have a data protection angle, which includes availability, resiliency, security, and backup/restore along with associated topics. Keep in mind that there are many aspects to data protection, along with various tools, technologies and techniques as well as tradecraft skills (experience).

    Speaking of tradecraft, the tips section has been expanded with more content to help refresh, or expand, your fundamental data infrastructure skills and experiences. Watch for more about tradecraft in future newsletters as well as elsewhere.

    Speaking of data protection, if you had not heard or forgot, some recent events included the Australian Tax Office (ATO), whose resiliency solution appears not to have been configured for, well, availability, resiliency and durability. You can read more about the ATO, lessons learned and the fallout by doing a Google search such as "australian tax office disaster". Another recent disaster or disruption was GitLab (not to be confused with GitHub), which lost around 300GB of data. Google something like "gitlab disaster" to see more.
    In the case of GitLab, it seems that a DevOps admin accidentally did something like an rm -rf (e.g. recursive and force) which, if you know what that means, you know might not be good.
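    For those unfamiliar with why that command class is so unforgiving, the Python equivalent of rm -rf is shutil.rmtree: like rm -rf it deletes an entire tree with no confirmation and no recycle bin. The hedged sketch below only touches a scratch directory it creates itself.

```python
# Demonstration of recursive, forced deletion: one wrong path variable and the
# whole tree is gone instantly, with no prompt and no undo.
import os
import shutil
import tempfile

root = tempfile.mkdtemp()  # scratch area; never point this at real data
os.makedirs(os.path.join(root, "repo", "data"))
with open(os.path.join(root, "repo", "data", "db.txt"), "w") as f:
    f.write("important")

# The equivalent of "rm -rf $root/repo":
shutil.rmtree(os.path.join(root, "repo"))
print(os.path.exists(os.path.join(root, "repo")))  # False - everything is gone

os.rmdir(root)  # clean up the now-empty scratch directory
```

    The defense is not a better command, it is the chain-of-events thinking discussed below: guard rails (prompts, move-aside instead of delete), plus tested backups so that a single slip cannot snowball.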

    As is the case with many disasters, near disasters and disruptions, they are usually the result of a chain of events, thus the mantra of isolating and containing faults to prevent them snowballing into something worse. What's concerning about GitLab is that there are decades of lessons learned that made this known and preventable.

    Hopefully GitLab's experiences will prompt others in (or moving to) so-called platform 3 or new DevOps environments to use things in new ways, as well as prevent old problems using known tradecraft skills, lessons and experiences.
    Also keep in mind that while technology can and will fail, hardware and software including clouds are defined by people, and when people are involved, human error is also present.

    In This Issue

  • Server StorageIO News Commentary
  • Tradecraft Articles, Tips & Tricks Topics
  • Server StorageIOblog posts
  • Various Events and Webinars
  • IT Industry Activity Trends
  • Industry Resources and Links
  • Connect and Converse With Us
  • About Us
    Enjoy this edition of the Server StorageIO update newsletter.

    Cheers GS

    Data Infrastructure and IT Industry Activity Trends

    Some recent Industry Activities, Trends and Announcements include:

    Cloud and object storage vendor Cloudian announced a new appliance (e.g. tin-wrapped software) that they claim delivers high density at low (cloud service like) pricing.

    Check out ioSafe, who has a line of fire (and water) proof NAS and Windows Server appliances as part of availability and data protection that can complement clouds. For those of you who are also Synology fans (or users), take a look at what ioSafe is doing for consumer, SOHO, ROBO, workgroup and SMB among other environments.

    Speaking of data protection, how are you going about wiping or digitally bleaching your storage, including NAND flash SSDs? In particular, are you doing deep cleaning, including those hard-to-reach persistent non-volatile memory (NVM) cell locations in SSDs? Check out what Blancco is doing for deep cleaning to wipe or digitally bleach your storage including SSDs. Another aspect of data protection is, after your physical assets have been wiped clean (e.g. digitally bleached), how will you safely dispose of the items? That's where various vendors such as OceanTech among others come into play.

    Server StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past StorageIOblog posts here

    Server StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Via SearchDataCenter: New options to evolve your data backup and recovery plan
    Via SmallBusinessComputing: Easy Storage for the Little Guy: Has the Time Come?
    Via InfoStor: 10 More Top Data Storage Applications
    Via Infostor: 10 Top Data Storage Applications

    View more Server, Storage and I/O trends and perspectives comments here

    Various Tips, Tools, Technology and Tradecraft Topics

    Recent Data Infrastructure Tradecraft Articles, Tips, Tools, Tricks and related topics.

    Via IDG/NetworkWorld:  What's a data infrastructure?
    Via Computerweekly:  NVMe: What to use, PCIe card vs U.2 and M.2
    Via InfoStor:  Cloud Storage Concerns, Considerations and Trends
    Via InfoStor:  SSD Trends, Tips and Topics

    Check out Neil Anderson (@flackboxtv) and his flackbox.com site to view various videos and tutorials about NetApp and Cisco along with VMware among others. Sharpen your data infrastructure server storage I/O tradecraft skills with the various labs and simulators that Neil has covered.

    Speaking of tradecraft skills and experience development, check out VMware Staff Architect William Lam's (@lamw) virtuallyghetto.com site for a new software-defined data center (SDDC) lab. This new lab focuses on automated deployment for vSphere 6.0u2 along with vSphere 6.5. In other related news, VMware has made vSphere 6.0 Update 3 generally available (GA), including enhancements to vSAN and vCenter. View more details at Duncan Epping's (@DuncanYB) VMware Yellow Bricks site.

    If you are interested in Microsoft Azure, check out this piece on SQL Server failover clustering, along with other Windows Server, Hyper-V, Nano, PowerShell and related topics here. Want to build a software-defined data center (SDDC) or software-defined data infrastructure (SDDI) based on Microsoft Windows Server, Hyper-V and related technologies? Check out this GitHub lab as well as this one for S2D among others.

    View more tips and articles here

    Events and Activities

    Recent and upcoming event activities.

    April 3-7, 2017 – Seminars – Dutch workshop seminar series – Nijkerk Netherlands

    March 15, 2017 – Webinar – SNIA/BrightTalk HyperConverged and Storage – 10AM PT

    January 26 2017 – Seminar – Presenting at Wipro SDx Summit London UK

    January 11, 2017 Webinar – Redmond Magazine
    Dell Software – Presenting – Tailor Your Backup Data Repositories to Fit Your Needs

    See more webinars and activities on the Server StorageIO Events page here.

    Server StorageIO Industry Resources and Links

    Useful links and pages:
    Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    OpenStack.org – Various OpenStack related items
    storageio.com/protect – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O benchmark and tools
    VMware Technical Network – Various VMware related items

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio
