Gaining Server Storage I/O Insight into Microsoft Windows Server 2016

Server Storage I/O Insight into Microsoft Windows Server 2016

server storage I/O trends
Updated 12/8/16

In case you had not heard, Microsoft announced the general availability (GA, also known as Release To Manufacturing (RTM)) of the newest version of its Windows server operating system, aka Windows Server 2016, along with System Center 2016. Note that in addition to being released via traditional distribution mediums as well as MSDN, the Windows Server 2016 bits are also available on Azure.

Microsoft Windows Server 2016
Windows Server 2016 Welcome Screen – Source Server StorageIOlab.com

For some this might be news, or a refresh of what Microsoft announced a few weeks ago (e.g. the formal announcement). Likewise, some of you may not be aware that Microsoft is celebrating Windows Server's 20th birthday (read more here).

Yet for others who have participated in the public beta aka public technical previews (TP) over the past year or two, or who have simply followed the information coming out of Microsoft and other venues, there should not be a lot of surprises.

What's New With Windows Server 2016

Microsoft Windows Server 2016 Desktop
Windows Server 2016 Desktop and tools – Source Server StorageIOlab.com

Besides a new user interface including the visual GUI and PowerShell among others, there are many new features and functionality, summarized below (a quick PowerShell sketch follows the list):

  • Enhanced time-server with 1ms accuracy
  • Nano and Windows Containers (Linux via Hyper-V)
  • Hyper-V enhanced Linux services including shielded VMs
  • Simplified management (on-premises and cloud)
  • Storage Spaces Direct (S2D) and Storage Replica (SR) – view more here and here


Storage Replica (SR) Scenarios including synchronous and asynchronous – Via Microsoft.com

  • Resilient File System aka ReFS (now default file system) storage tiering (cache)
  • Hot-swap virtual networking device support
  • Reliable Change Tracking (RCT) for faster Hyper-V backups
  • RCT improves resiliency vs. VSS change tracking
  • PowerShell and other management enhancements
  • Including subordinated / delegated management roles
  • Complement Azure AD with on-premises AD
  • Resilient/HA RDS using Azure SQL DB for connection broker
  • Encrypted VMs (at rest and during live migration)
  • AD Federation Services (FS) authenticate users in LDAP directories
  • vTPM for securing and encrypting Hyper-V VMs
  • AD Certificate Services (CS) increase support for TPM
  • Enhanced TPM support for smart card access management
  • AD Domain Services (DS) security resiliency for hybrid and mobile devices
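To give a feel for the PowerShell side of things, here is a minimal sketch, assuming a Windows Server 2016 system with the in-box ServerManager and Hyper-V PowerShell modules (nothing beyond defaults is assumed), for poking at a few of the items above:

# Check the enhanced time service status (per the 1ms accuracy item)
w32tm /query /status

# List the roles and features currently installed on this server
Get-WindowsFeature | Where-Object Installed

# Count the Hyper-V management cmdlets available
Get-Command -Module Hyper-V | Measure-Object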

Here is a Microsoft TechNet post that goes into more detail on what is new in Windows Server 2016.

Free ebook: Introducing Windows Server 2016 Technical Preview (Via Microsoft Press)

Check out the above free ebook; after looking through it, I recommend adding it to your bookshelf. There is lots of good intro and overview material for Windows Server 2016 to get you up to speed quickly, or as a refresh.

Storage Spaces Direct (S2D) CI and HCI

Storage Spaces Direct (S2D) builds on Storage Spaces that appeared in earlier Windows and Windows Server editions. Some of the major changes and enhancements include the ability to leverage local direct attached storage (DAS) such as internal (or external) dedicated NVMe, SAS and SATA HDDs, as well as flash SSDs, that are used for creating software defined storage for various scenarios.

Scenarios include disaggregated converged infrastructure (CI) as well as aggregated hyper-converged infrastructure (HCI) for Hyper-V among other workloads. Windows Server 2016 S2D nodes communicate (from a storage perspective) via a software storage bus. Data protection and availability is enabled between S2D nodes via Storage Replica (SR), which can do software based synchronous and asynchronous replication.
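For a frame of reference, the following is a hedged PowerShell sketch of the basic S2D enablement flow; it assumes a validated failover cluster already exists, and the cluster and volume names are hypothetical examples:

# Enable Storage Spaces Direct on an existing failover cluster
Enable-ClusterStorageSpacesDirect -CimSession "Cluster01"

# Carve a mirrored ReFS cluster shared volume out of the S2D pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -Size 1TB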


Aggregated – Hyper-Converged Infrastructure (HCI) – Source Microsoft.com


Disaggregated – Converged Infrastructure (CI) – Source Microsoft.com

The following is a Microsoft produced YouTube video providing a nice overview and insight into Windows Server 2016 and Microsoft Software Defined Storage aka S2D.




YouTube Video Storage Spaces Direct (S2D) via Microsoft.com

Server storage I/O performance

What About Performance?

A common question that comes up with servers, storage, I/O and software defined data infrastructure is what about performance?

Following are various links to different workloads showing performance for Hyper-V, S2D and Windows Server; a sample Diskspd command follows the list of links. Note as with any benchmark, workload or simulation, take them for what they are, something to compare that may or may not be applicable to your own workloads and environments.

  • Large scale VM performance with Hyper-V and in-memory transaction processing (Via Technet)
  • Benchmarking Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (Via cisjournal PDF)
  • Server 2016 Impact on VDI User Experience (Via LoginVSI)
  • Storage IOPS update with Storage Spaces Direct (Via TechNet)
  • SQL Server workload (benchmark) Order Processing Benchmark using In-Memory OLTP (Via Github)
  • Setting up testing Windows Server 2016 and S2D using virtual machines (Via MSDN blogs)
  • Storage throughput with Storage Spaces Direct (S2D) TP5 (Via TechNet)
  • Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
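As an example of using the Diskspd tool mentioned above, here is an illustrative (not definitive) invocation; the file path and parameters are assumptions you would tune for your own environment:

# 8KB blocks, 60 second run, 4 outstanding I/Os per thread, 8 threads,
# random access, 30% writes, against a 1GB test file
diskspd.exe -b8K -d60 -o4 -t8 -r -w30 -c1G C:\test\testfile.dat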

Where To Learn More

For those of you not as familiar with Microsoft Windows Server and related topics, or that simply need a refresh, here are several handy links as well as resources.

  • Introducing Windows Server 2016 (Free ebook from Microsoft Press)
  • What’s New in Windows Server 2016 (Via TechNet)
  • Microsoft S2D Software Storage Bus (Via TechNet)
  • Understanding Software Defined Storage with S2D in Windows Server 2016 (Via TechNet)
  • Microsoft Storage Replica (SR) (Via TechNet)
  • Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
  • Microsoft Windows S2D Software Defined Storage (Via TechNet)
  • Windows Server 2016 and Active Directory (Redmond Magazine Webinar)
  • Data Protection for Modern Microsoft Environments (Redmond Magazine Webinar)
  • Resilient File System aka ReFS (Via TechNet)
  • DISKSPD now on GitHub, and the mysterious VMFLEET released (Via TechNet)
  • Hyper-converged solution using Storage Spaces Direct in Windows Server 2016 (Via TechNet)
  • NVMe, SSD and HDD storage configurations in Storage Spaces Direct TP5 (Via TechNet)
  • General information about SSD at www.thessdplace.com and NVMe at www.thenvmeplace.com
  • How to run nested Hyper-V and Windows Server 2016 (Via Altaro and via MSDN)
  • How to run Nested Windows Server and Hyper-V on VMware vSphere ESXi (Via Nokitel)
  • Get the Windows Server 2016 evaluation bits here
  • Microsoft Azure Stack overview and related material via Microsoft
  • Introducing Windows Server 2016 (Via MicrosoftPress)
  • Various Windows Server and S2D lab scripts (Via Github)
  • Storage Spaces Direct – Lab Environment Setup (Via Argon Systems)
  • Setting up S2D with a 4 node configuration (Via StarWind blog)
  • SQL Server workload (benchmark) Order Processing Benchmark using In-Memory OLTP (Via Github)
  • Setting up testing Windows Server 2016 and S2D here using virtual machines (Via MSDN blogs)
  • Hyper-V large-scale VM performance for in-memory transaction processing (Via Technet)
  • BrightTalk Webinar – Software-Defined Data Centers (SDDC) are in your Future (if not already here)
  • Microsoft TechNet: Understand the cache in Storage Spaces Direct
  • BrightTalk Webinar – Software-Defined Data Infrastructures Enabling Software-Defined Data Centers
  • Happy 20th Birthday Windows Server, ready for Server 2016?
  • Server StorageIO resources including added links, tools, reports, events and more.

What This All Means

While Microsoft Windows Server recently celebrated its 20th birthday (or anniversary), a lot has changed as well as evolved. This includes Windows Server 2016 supporting new deployment and consumption models (e.g. lightweight Nano, full data center with desktop interface, on-premises, bare metal, virtualized (Hyper-V, VMware, etc.) as well as cloud). Besides how it is consumed and configured, which can also be in CI and HCI modes, Windows Server 2016 along with Hyper-V extend virtualization and container capabilities into non-Microsoft environments, specifically around Linux and Docker. Not only is support for those environments and platforms enhanced, so too are the management capabilities and interfaces, from PowerShell to the Bash Linux shell being part of Windows 10 and Server 2016.

What this all means is that if you have not looked at Windows Server in some time, it's time you do. Even if you are not a Windows or Microsoft fan, you will want to know what has been updated (perhaps even update your FUD if that is the case) to stay current. Get your hands on the bits and try Windows Server 2016 on a bare metal server, as a VM guest, or via cloud including Azure, or simply leverage the above resources to learn more and stay informed.

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

EMCworld 2016 EMC Hybrid and Converged Clouds Your Way

EMCworld 2016 EMC Hybrid and Converged Clouds Your Way

server storage I/O trends

This is a quick post looking at a high-level view of today’s EMCworld 2016 announcements.

Following up from yesterday's post covering the set of announcements, today's theme is around Hybrid, Converged and Clouds your way. In addition to the morning announcements, EMC also announced yesterday afternoon InfoArchive 4.0 and EMC LEAP cloud native content applications for Enterprise Content Management (ECM). However, let's focus on today's announcements with a focus on modernizing, transforming and automating your data center.

Today’s announcements include:

  • Cloud solution portfolio enhancements with Native Hybrid Cloud (NHC) turnkey developer platform for cloud native application development. NHC editions include those for VMware vSphere, OpenStack and the VMware Photon Platform. Read more here.

  • VCE VxRack System 1000 with new Neutrino Nodes which are software defined hyper-converged rack scale solutions to support turnkey cloud (public, private, hybrid) implementations. Read more about VxRack System 1000 with links here.

  • NVMe based DSSD D5 flash SSD system enhancements include the ability to stripe two systems together in a single rack to double the IOPS, bandwidth and capacity. Also new is a VCE VxRack system with DSSD. Read more about DSSD D5 enhancements here.

Some Hardware That Gets Software Defined

Rear view of EMC Neutrino node

Where To Learn More

  • Session Streaming: For video of keynotes, general sessions, backstage sessions, and EMC TV coverage, click here
  • Social: Follow @EMCWorld, @EMCCorp, @EMC_News and @EMCStorage, and join conversations with #EMCWORLD, and like EMC on Facebook
  • Photos: Access event photos via Flickr and the EMC Pulse Blog, or visit the special EMC World News microsite here
  • Reflections: Read Core Technologies President Guy Churchward's Reflections post on today's announcements here
  • Visit the EMC Store, the EMC Community Network Site and The Core Blog

What This All Means

For those of you who have installed OpenStack either from scratch, or using one of the appliances, you understand what's involved in doing so. The point is that for those who are in the business of, or whose jobs are based on, installing, configuring or software defining the software and cloud configurations, turnkey solutions may not be a fit, at least yet. On the other hand, if your focus is doing other things and you are looking to boost productivity, then turnkey solutions are a way of fast tracking deployment. Likewise for those who need more speed from bandwidth or IOPS, the DSSD D5 enhancements will help in those environments.

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

vSphere Software Defined Beta, Something for free from VMware

vSphere Beta, Something free from VMware (other than your time)

server storage I/O trends

Something free from VMware (other than time)

VMware is looking for candidate beta test sites and environments for an upcoming vSphere release. The target audience or environments are those who have deployed vSphere 5.5 and 6.0 in their environment and are looking to test the new software (e.g. bits).

What VMware is looking for

For this private community vSphere beta, VMware is looking for participants with expectations including:

  • Online acceptance of the Master Software Beta Test Agreement will be required prior to visiting the Private Beta Community
  • Install beta software within 3 days of receiving access to the beta product
  • Provide feedback within the first 4 weeks of the beta program
  • Submit Support Requests for bugs, issues and feature requests
  • Complete surveys and beta test assignments
  • Participate in the private beta discussion forum and conference calls

How to get involved and test the bits?

To get involved (and get the bits), simply fill out the VMware form found here (no credit card or money required, just some of your time).

The VMware vSphere team will grant access to the program to selected candidates in stages. This vSphere Beta Program leverages a private Beta community to download software and share information. VMware will provide discussion forums, webinars, and service requests to enable you to share your opinion with them.

VMware cites the following reasons to participate in this vSphere beta opportunity:

  • Receive early access to the vSphere Beta products
  • Interact with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers
  • Provide direct input on product functionality, configurability, usability, and performance
  • Provide feedback influencing future products, training, documentation, and services
  • Collaborate with other participants, learn about their use cases, and share advice and learnings

What This All Means

Having been involved in earlier vSphere betas, this is a great way to get an early glimpse and hands-on, behind the wheel, real-world experience with new technology, as well as to test how things will work in your environment, or in a VMware hosted one. You are free to use and test the bits (e.g. software) in your environment (or VMware hosted) how you like in a free-form, real-world way. In addition to hands-on time, you also get exposure and a chance to interact with the VMware folks.

This experience can be useful for planning how to use new feature functionality, as well as for strategic planning for deployment once the production bits get released down the road.

Remember to sign up here if interested; see you in the beta.

Ok, nuff said, for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

NVMe Place NVM Non Volatile Memory Express Resources

Updated 8/31/19
NVMe place server Storage I/O data infrastructure trends

Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

Disclaimer

Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

NVM Express Organization
Image used with permission of NVM Express, Inc.

Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

 

The NVMe Place resources and NVM including SCM, PMEM, Flash

NVMe place covers Non Volatile Memory (NVM) topics: NVM including NAND flash, storage class memories (SCM) and persistent memories (PM) are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a new server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSD among other things.

Server Storage I/O NVMe PCIe SAS SATA AHCI
Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in 2.5″ drive form factors that use a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as being add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end (to servers or storage systems) interface for accessing fast flash and other NVM based devices.

NVMe as back-end storage
NVMe as a “back-end” I/O interface for NVM storage media

NVMe as front-end server storage I/O interface
NVMe as a “front-end” interface for servers or storage systems/appliances

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used for the back-end as well as for a front-end server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.

NVMe features

Main features of NVMe include among others (a quick PowerShell check for NVMe attached devices follows the list):

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging various fast PCIe interface and available lanes
  • Dual-pathing of devices like what is available with dual-path SAS devices
  • Unlock the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-play with in-box drivers on many popular operating systems and hypervisors
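For a quick sanity check of what is NVMe attached on a Windows system, here is a small PowerShell sketch; it assumes only the in-box Storage module (Windows 8.1 / Server 2012 R2 or later):

# Show physical disks attached via NVMe
Get-PhysicalDisk | Where-Object BusType -eq "NVMe" | Select-Object FriendlyName, MediaType, Size, BusType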

Shared external PCIe using NVMe
NVMe and shared PCIe (e.g. shared PCIe flash DAS)

NVMe related content and links

The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

  • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
  • Why NVMe Should Be in Your Data Center (Via Micron.com)
  • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
  • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
  • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
  • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
  • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
  • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
  • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
  • NVM Express solutions (Via SuperMicro)
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
  • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
  • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
  • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
  • What should I consider when using SSD cloud? (Via SearchCloudStorage)
  • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
  • Selecting Storage: Start With Requirements (Via NetworkComputing)
  • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
  • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
  • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
  • How many IOPS can a HDD, HHDD or SSD do (Part I)?
  • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
  • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
  • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
  • Via EnterpriseStorageForum: 10-Year Review of Data Storage

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPS) and more data to be moved (bandwidth) at a lower response time using less CPU.

NVMe and SATA flash SSD performance

The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take the results with a grain of salt. While IOPS and bandwidth are often discussed, keep in mind that the new protocol, drivers and device controllers with NVMe streamline I/O so that less CPU is needed.
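If you want to run a similar comparison yourself, the following is a hedged Diskspd sketch; the drive letters and file names are hypothetical assumptions, and -L collects latency statistics so response time differences, not just IOPS, are visible:

# Same workload against a SATA attached SSD and an NVMe attached SSD
diskspd.exe -b4K -d30 -o8 -t4 -r -L -c1G S:\sata_test.dat
diskspd.exe -b4K -d30 -o8 -t4 -r -L -c1G N:\nvme_test.dat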

Additional NVMe Resources

Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme forum is a good source if you are a developer, or want to stay up on what is happening in and around device drivers and associated topics.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

Disclaimer

Disclaimer: Please note that this site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks. Check out the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

NVM Express Organization
Image used with permission of NVM Express, Inc.

Wrap Up

Watch for updates with more content, links and NVMe resources to be added here soon.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

VMware VVOLs storage I/O fundamentals (Part 1)

VMware VVOL’s storage I/O fundamentals (Part I)

Note that this is a three part series with the first piece here (e.g. Are VMware VVOL's in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 2).

Some of you may already be participating in the VMware beta of VVOL involving one of the initial storage vendors also in the beta program.

Ok, now let's go a bit deeper. However, if you want some good music to listen to while reading this, check out @BruceRave GoDeepMusic.Net and shows here.

Taking a step back, digging deeper into Storage I/O and VVOL’s fundamentals

Instead of a VM host accessing its virtual disk (aka VMDK) stored in a VMFS formatted data store (part of the ESXi hypervisor) built on top of a SCSI LUN (e.g. SAS, SATA, iSCSI, Fibre Channel aka FC, FCoE aka FC over Ethernet, IBA/SRP, etc.) or an NFS file system presented by a storage system (or appliance), VVOL's push more functionality and visibility down into the storage system. VVOL's shift more intelligence and work from the hypervisor down into the storage system. Instead of a storage system simply presenting a SCSI LUN or NFS mount point and having limited (coarse) to no visibility into how the underlying storage bits, bytes and blocks are being used, storage systems gain more awareness.

Keep in mind that even files and objects still ultimately get mapped to pages and blocks aka sectors, even on NAND flash-based SSD's. However, also keep an eye on some new technology such as the Seagate Kinetic drive that, instead of responding to SCSI block based commands, leverages object API's and associated software on servers. Read more about these emerging trends here and here at objectstoragecenter.com.

With a normal SCSI LUN the underlying storage system has no knowledge of how the upper level operating system, hypervisor, file system or application such as a database (doing raw I/O) is allocating the pages or blocks of memory aka storage. It is up to the upper level storage and data management tools to map from objects and files to the corresponding extents, pages and logical block addresses (LBA) understood by the storage system. In the case of a NAS solution, there is a layer of abstraction placed over the underlying block storage handling file management and the associated file to LBA mapping activity.
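As a simple worked example of that mapping (the 512-byte sector size and the byte offset here are illustrative assumptions):

# Map a byte offset within a LUN to a logical block address (LBA)
$sectorSize = 512                 # bytes per sector (assumption)
$byteOffset = 1048576             # data located 1 MiB into the LUN
$lba = [math]::Floor($byteOffset / $sectorSize)
"Byte offset $byteOffset maps to LBA $lba"    # LBA 2048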

Storage I/O basics
Storage I/O and IOP basics and addressing: LBA’s and LBN’s

Getting back to VVOL's: instead of simply presenting a LUN, which is essentially a linear range of LBA's (think of a big table or array) where the hypervisor manages data placement and access, the storage system now gains insight into what LBA's correspond to various entities such as a VMDK or VMX, log, clone, swap or other VMware objects. With this insight, storage systems can now do native and more granular functions such as clone, replication and snapshot among others, as opposed to simply working on a coarse LUN basis. Similar concepts extend over to NAS NFS based access. Granted, there is more to VVOL's, including the ability to get the underlying storage system more closely integrated with the virtual machine, hypervisor and associated management, including supported service management and classes or categories of service across performance, availability, capacity and economics.

What about VVOL, VAAI and VASA?

VVOL's build on earlier VMware initiatives including VAAI and VASA. With VAAI, VMware hypervisors can off-load common functions such as copy, clone and zero copy to storage systems that support those features, similar to how a computer can off-load graphics processing to a graphics card if present.

VASA however provides a means for visibility, insight and awareness between the hypervisor and its associated management (e.g. vCenter etc) as well as the storage system. This includes storage systems being able to communicate and publish to VMware its capabilities for storage space capacity, availability, performance and configuration among other things.

With VVOL's, VASA gets leveraged for bidirectional (e.g. two-way) communication where the VMware hypervisor and management tools can tell the storage system about configuration, activities to do and other things. Hence why VASA is important to have in your VMware CASA.
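For the curious, here is a hedged PowerCLI sketch for peeking at what VASA has registered; it assumes VMware PowerCLI with the storage module is installed and you have vCenter access, the vCenter name is a placeholder, and cmdlet availability varies by PowerCLI version:

Connect-VIServer -Server vcenter.example.com
# List registered VASA storage providers
Get-VasaProvider | Select-Object Name, Status, Version
# Show any VVOL type datastores
Get-Datastore | Where-Object { $_.Type -eq "VVOL" } | Select-Object Name, CapacityGB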

What’s this object storage stuff?

VVOL's are a form of object storage access in that they differ from traditional block (LUN's) and file (NAS volumes/mount points) access. However, keep in mind that not all object storage is the same, as there are different object storage access methods and architectures.

object storage
Object Storage basics, generalities and block file relationships

Avoid making the mistake of assuming that when you hear object storage, it means ANSI T10 (the folks that manage the SCSI command specifications) Object Storage Device (OSD), or some other specific implementation. There are many different types of underlying object storage architectures, some with block and file as well as object access front ends. Likewise there are many different types of object access that sit on top of object architectures as well as traditional storage systems.

Object storage I/O
An example of how some object storage gets accessed (not VMware specific)

Also keep in mind that there are many different types of object access mechanisms including HTTP REST based, S3 (e.g. a common industry defacto standard based on the Amazon Simple Storage Service), SNIA CDMI, SOAP, Torrent, XAM, JSON, XML, DICOM and HL7 just to name a few, not to mention various programmatic bindings or application specific implementations and API's. Read more about object storage architectures, access and related topics, themes and trends at www.objectstoragecenter.com.
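As a minimal illustration of the HTTP REST style of object access (the bucket and object names are hypothetical, and this assumes a publicly readable object since authenticated S3 requests require request signing):

# Fetch an object over HTTP in the S3 style
Invoke-WebRequest -Uri "https://example-bucket.s3.amazonaws.com/sample.txt" -OutFile "sample.txt"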

Let's take a break here, and when you are ready, click here to read the third piece in this series, VMware VVOL's and storage I/O fundamentals Part 2.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Matt Vogt of Computex talks VMware vCOPs in his first ever podcast

Storage I/O trends

Matt Vogt of Computex talks VMware vCOPs in his first ever podcast


In this episode from the Computex Rethink your Datacenter for 2017 planning and strategy event I am joined by Matt Vogt (@MattVogt).

Introducing Matt Vogt

Matt is a Principal Architect with Computex Technology Solutions, as well as a certified VMware specialist and fellow vExpert.


Not only is this the first appearance by Matt on the StorageIO Podcast, it is also his first time as a guest on any podcast, so I'm honored to host his global podcast debut here.

We talk about the role of automation for performance and capacity optimization along with how VMware vCOPs plays an important role.

Listen in to learn more about how to gain insight and situational awareness to make informed decisions for your data infrastructure environment with Matt.

Check out Matt's blog here at blog.mattvogt.net and listen in to the podcast here.


Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

iVMcontrol iPhone VMware management, iTool or iToy?

Storage I/O trends

iVMcontrol iPhone VMware management, iTool or iToy?

A few months back I was looking for a simple, easy to use yet robust tool for accessing and managing my VMware environment from my iPhone. The reason is that I don't always like to carry a laptop or tablet around, not to mention neither fits in a pocket very well. Needless to say there are many options for accessing VMware products and implementations that run on tablets including iPads as well as laptops among others.

Why do I need iVMcontrol?

I wanted something that I could use to quickly access and check on a VM guest, start or stop things, and gain status updates if or when needed from my iPhone. Also, keeping in mind that this would be a tool not used constantly throughout the day, maybe at best once or twice a week, it needed to be affordable as well. At $9.99 USD the tool I found and selected (iVMcontrol) was not free, however I have already gotten that value out of the tool in just a few months of having it.

As mentioned, the tool is iVMcontrol which you can get from the iTunes store (here’s the link).

Storage I/O IVM on iPhone
View of iVMcontrol from iPhone

Granted iVMcontrol is not the same as other apps for full-sized tablets or laptops, however for an iPhone it's not bad! In fact other than a few nuances, namely using a virtual mouse, it's pretty good for what I use it for.

That's the key: while I use the vSphere client or vCenter browser for real activities, iVMcontrol serves a different purpose. That purpose is, for example, if I just need to check on something or do basic functions without having to get the laptop out or something else. Even in the lab, if I'm making a change or need to start or stop things and forget the laptop in another room, no worries, simply use the iPhone.

Sure using a tablet would be easier, however I usually don't carry a tablet in my pocket.

How often do I use iVMcontrol?

It depends, however usually a couple of times a week depending on what I'm doing.

For example if I need to quickly check on a guest VM, start or stop something, or general status check iVMcontrol has come in handy.

Storage I/O IVM main screen
Various VMware hosts (PM’s) in a VMware datacenter

Storage I/O IVM main screen
Various Guest VMs on VMware host (PM)

iVM VMware storage I/O space
VMware host storage space capacity usage

Storage I/O IVM main screen
Managing a guest VM

iVM Windows guest
Accessing Windows Guest VM via iVMcontrol

iVM Windows guest storage I/O activity
Checking on Windows Guest Storage I/O activity

As you can see the screen is small; sure, you can zoom in, thus it is good for checking in on activity or doing basic things. However for more involved activity, that's where a tablet or regular computer comes into play, accessing the VM guests or VMware using the vSphere Client or vCenter web client type tools.

Is iVMcontrol an iTool or iToy?

IMHO it's a tool, granted it's also a fun toy.

Is a tool such as iVMcontrol a necessity, or a nice-to-have for when I need to check on something quickly?

That depends on your needs vs. wants.

For me, it is a convenience tool to have when I need it, however just because I have it does not mean I have to use it all the time.

Ok, nuff said (for now)

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Server virtualization nested and tiered hypervisors

Storage I/O trends

Server virtualization nested and tiered hypervisors

A few years ago I did a piece (click here) about the then emerging trend of tiered hypervisors, particularly using different products or technologies in the same environment.

Tiered snow tools
Tiered snow management tools and technologies

Tiered hypervisors can be as simple as using different technologies such as VMware vSphere/ESXi, Microsoft Hyper-V, KVM or Xen in your environment on different physical machines (PMs) for various business and application purposes. This is similar to having different types or tiers of technology including servers, storage, networks or data protection to meet various needs.

Another aspect is nesting hypervisors on top of each other for testing, development and other purposes.

nested hypervisor

I use nested VMware ESXi for testing various configurations as well as verifying new software when needed, or creating a larger virtual environment for functionality simulations. If you are new to nesting, which is running a hypervisor on top of another hypervisor such as ESXi on ESXi or Hyper-V on ESXi, here are a couple of links to get you up to speed. One is a VMware knowledge base piece, two are from William Lam (@lamw) Virtual Ghetto (getting started here and VSAN here) and the other is from Duncan Epping (@DuncanYB) at Yellow Bricks.
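For those curious what enabling nesting can look like, here is a hedged PowerCLI sketch using the vSphere 5.1 and later API property for exposing hardware assisted virtualization to a guest; the VM name is a placeholder, and the VM should be powered off first:

# Expose hardware assisted virtualization to a guest VM
$vm = Get-VM -Name "nested-esxi-01"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)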

Recently I did a piece over at FedTech titled 3 Tips for Maximizing Tiered Hypervisors that looks at using multiple virtualization tools for different applications and how they can provide a number of benefits.

Here is an excerpt:

Tiered hypervisors can be run in different configurations. For example, an agency can run multiple server hypervisors on the same physical blade or server or on separate servers. Having different tiers or types of hypervisors for server and desktop virtualization is similar to using multiple kinds of servers or storage hardware to meet different needs. Lower-cost hypervisors may have lacked some functionality in the past, but developers often add powerful new capabilities, making them an excellent option.

IT administrators who are considering the use of tiered or multiple hypervisors should know the answers to these questions:

  • How will the different hypervisors be managed?
  • Will the environment need new management tools for backup, monitoring, configuration, provisioning or other routine functions?
  • Do existing tools offer support for different hypervisors?
  • Will the hypervisors have dedicated PMs or be nested?
  • How will IT migrate virtual machines and their guests between different hypervisors? For example if using VMware and Hyper-V, will you use VMware vCenter Multi-Hypervisor Manager or something similar?

So how about it, how are you using and managing tiered hypervisors?

Ok, nuff said for now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

HDS Mid Summer Storage and Converged Compute Enhancements

Storage I/O trends

Converged Compute, SSD Storage and Clouds

Hitachi Data Systems (HDS) announced today several enhancements to their data storage and unified compute portfolio as part of their Maximize I.T. initiative.

Setting the context

As part of setting the stage for this announcement, HDS presented the following strategy as part of their vision for IT transformation and cloud computing.

https://hds.com/solutions/it-strategies/maximize-it.html?WT.ac=us_hp_flash_r11

What was announced

This announcement builds on earlier ones around HDS Unified Storage (HUS) primary storage using NAND flash MLC Solid State Devices (SSD) and Hard Disk Drives (HDD's), along with unified block and file (NAS), as well as the Unified Compute Platform (UCP), also known as converged compute, networking, storage and software. These enhancements follow recent updates to the HDS Content Platform (HCP) for object, file and content storage.

There are three main focus areas of the announcement:

  • Flash SSD storage enhancements for HUS
  • Unified with enhanced file (aka BlueArc based)
  • Enhanced unified compute (UCP)

HDS Flash SSD acceleration

The question should not be if SSD is in your future, rather when, where, with what and how much will be needed.

As part of this announcement, HDS is releasing an all flash SSD based HUS enterprise storage system. Similar to what other vendors have done, HDS is attaching flash SSD storage to their HUS systems in place of HDD's. Hitachi has developed their own SSD module, announced in 2012 (read more here). The HDS SSD modules use Multi Level Cell (MLC) NAND flash chips (dies) and now support 1.6TB of storage space capacity per module. This is different from other vendors who either use NAND flash SSD drive form factor devices (e.g. Intel, Micron, Samsung, SANdisk, Seagate, STEC (now WD), WD among others), or PCIe form factor cards (e.g. FusionIO, Intel, LSI, Micron, Virident among others), or attach a third-party external SSD device (e.g. IBM/TMS, Violin, Whiptail etc.).

Like some other vendors, HDS has also done more than simply attach an SSD (drive, PCIe card, or external device) to their storage systems and call it an integrated solution. What this means is that HDS has implemented software or firmware changes in their storage systems to manage durability and extend flash duty cycles affected by program/erase (P/E) cycle wear. In addition HDS has implemented performance optimization in their storage systems to leverage the faster SSD modules; after all, faster storage media or devices need fast storage systems or controllers.

While the new all flash storage system can initially be bought with just SSD, similar to other hybrid storage solutions, hard disk drives (HDD's) can also be installed. For enabling full performance at low latency, HDS is addressing both the flash SSD modules as well as the storage systems they attach to, including back-end, front-end and caching in-between.

The release enables 500,000 (half a million) IOPS; no IOP size, read vs. write mix, or random vs. sequential details were indicated. Future firmware (non-disruptive) is planned to enable higher performance that HDS claims will reach 1,000,000 IOPS at under a millisecond.

In addition to future performance improvements, HDS is also indicating increased storage space capacity of its MLC flash SSD modules (1.6TB today). Using 12 modules (1.6TB each), 154TB of flash SSD can be placed in a single rack.

HDS File and Network Attached Storage (NAS)

HUS unified NAS file system and gateway (BlueArc based) enhancements include:

  • New platforms leveraging faster processors (both Intel and Field Programmable Gate Arrays (FPGA’s))
  • Common management and software tools from 3000 to new 4000 series
  • Bandwidth doubled with faster connections and more memory
  • Four 10GbE NAS serving ports (front-end)
  • Four 8Gb Fibre Channel ports (back-end)
  • FPGA leveraged for off-loading some dedupe functions (faster performance)

HDS Unified Compute Platform (UCP)

As part of this announcement, HDS is enhancing the Unified Compute Platform (UCP) offerings. HDS re-entered the compute market in 2012, joining other vendors offering unified compute, storage and networking solutions. The HDS converged data infrastructure competes with the AMD (SeaMicro) SM15000, Dell vStart and VRTX (for the lower end market), EMC and VCE vBlock, NetApp FlexPod, along with those from HP (including Moonshot micro servers), IBM PureSystems, Oracle and others.

UCP Pro for VMware vSphere

  • Turnkey converged solution (Compute, Networking, Storage, Software)
  • Includes VMware vSphere pre-installed (OEM from VMware)
  • Flexible compute blade options
  • Three storage system options (HUS, HUS VM and VSP)
  • Cisco and Brocade IP networking
  • UCP Director 3.0 with enhanced automation and orchestration software

UCP Select for Microsoft Private Cloud

  • Supports Hyper-V 3.0 server virtualization
  • Live migration with DR and resynch
  • Microsoft Fast Track certified

UCP Select for Oracle RAC

  • HDS Flash SSD storage
  • SMP x86 compute for performance
  • 2x improvement for IOPS at less than 1 millisecond
  • Common management with HiCommand suite
  • Integrated with Oracle RMAN and OVM

UCP Select for SAP HANA

  • Scale out to 8TB of memory (DRAM)
  • Tier 1 storage system certified for SAP HANA DR
  • Leverages SAP HANA SAP storage connector API

What This All Means

Storage I/O trends

With these announcements HDS is extending its storage centric hardware, software and services solution portfolio for block, file and object access across different usage tiers (systems, applications, mediums). HDS is also expanding their converged unified compute platforms to stay competitive with others including Dell, EMC, Fujitsu, HP, IBM, NEC, NetApp and Oracle among others. For environments with HDS storage looking for converged solutions to support VMware, Microsoft Hyper-V, Oracle or SAP HANA these UCP systems are worth checking out as part of evaluating vendor offerings. Likewise for those who have HDS storage exploring SSD offerings, these announcements give opportunities to enable consolidation as do the unified file (NAS) offerings.

Note that HDS does not currently have a public, formalized message or story around PCIe flash cards; however, they have relationships with various vendors as part of their UCP offerings.

Overall a good set of incremental enhancements for HDS to stay competitive and leverage their field proven capabilities including management software tools.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Putting some VMware ESX storage tips together: (Part II)

In the first part of this post I showed how to use a tip from Duncan Epping to fake VMware into thinking that an HHDD (Hybrid Hard Disk Drive) was an SSD.

Now let's look at using a tip from Dave Warburton to make an internal SATA HDD into an RDM for one of my Windows-based VMs.

My challenge was that I had a VM with a guest that I wanted to have a Raw Device Mapping (RDM) HDD accessible to it, except the device was an internal SATA device. Given that, using the standard tools and reading some of the material available, it would have been easy to give up and quit since the SATA device was not attached to an FC or iSCSI SAN (such as my Iomega IX4 I bought from Amazon.com).

Image of internal RDM with VMware
Image of internal SATA drive being added as a RDM with vClient

Thanks to Dave's great post that I found, I was able to create an RDM of an internal SATA drive and present it to the existing VM running Windows 7 Ultimate, and it is now happy, as am I.

Pay close attention to make sure that you get the correct device name for the steps in Dave’s post (link is here).

From the ESX command line, I found the name of the device I wanted to use, which is:

t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5

Then I used the following ESX shell command per Dave’s tip to create an RDM of an internal SATA HDD:

vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5 /vmfs/volumes/dat1/rdm_ST1500L.vmdk

Then the next steps were to update an existing VM using vSphere client to use the newly created RDM.
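As an aside, a similar result can likely be had via PowerCLI (an untested sketch on my part; the VM name is a placeholder and the device name is the one from above, with the exact device path format depending on your environment):

# Attach the internal SATA device to a VM as a physical mode RDM
New-HardDisk -VM (Get-VM "Win7VM") -DiskType RawPhysical -DeviceName /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5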

Hint: pay very close attention to your device naming, along with what you name the RDM and where you place it. Also, I recommend trying or practicing on a spare or scratch device first in case something gets messed up. I practiced on an HDD used for moving files around, and after doing the steps in Dave's post, added the RDM to an existing VM, started the VM and accessed the HDD to verify all was fine (it was). After shutting down the VM, I removed the RDM from it as well as from ESX, and then created the real RDM.

As per Dave's tip, the vSphere Client did not recognize the RDM per se; however, after telling it to look at existing virtual disks and browsing the data stores, lo and behold, the RDM I was looking for was there. The following shows an example of using vSphere to add the new RDM to one of my existing VMs.

In case you are wondering why I want to make a non-SAN HDD an RDM vs. doing something else? Simple: the HDD in question is a 1.5TB HDD that has backups on it that I want to use as is. The HDD is also BitLocker protected, and I want the flexibility to remove the device and access it via a non-VM based Windows system if I have to.


Image of my VMware server with internal RDM and other items

Could I have accomplished the same thing using a USB attached device accessible to the VM?

Yes, and in fact that is how I do periodic updates to removable media (HDD using Seagate Goflex drives) where I am not as concerned about performance.

While I back up off-site to Rackspace and AWS clouds, I also have a local disk based backup, along with creating periodic full Gold or master off-site copies. The off-site copies are made to removable Seagate Goflex SATA drives using a USB to SATA Goflex cable. I also have the Goflex eSATA to SATA cable that comes in handy to quickly attach a SATA device to anything with an eSATA port including my Lenovo X1.

As a precaution, I used a different HDD, containing data I was not concerned about if something went wrong, to test the process before doing it with the drive containing backup data. Also as a precaution, the data on the backup drive is also backed up to removable media and to my cloud provider.

Thanks again to both Dave and Duncan for their great tips; I hope that you find these and other material on their sites as useful as I do.

Meanwhile, time to get some other things done, as well as continue looking for and finding good workarounds and tricks to use in my various projects. Drop me a note if you see something interesting.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Podcast: vBrownbags, vForums and VMware vTraining with Alastair Cooke


This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here), which you can also listen to via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml).

StorageIO industry trends cloud, virtualization and big data

In this episode, we go virtual, both with the topic (virtualization) and by communicating around the world via Skype. My guest is Alastair Cooke (@DemitasseNZ), who joins me from New Zealand to talk about VMware education, training and social networking. Some of the topics that we cover include vForums, vBrownbags, VMware VCDX certification, VDI, AutoLab, Professional vBrownbag tech talks, coffee and more. If you are into server virtualization or virtual desktop infrastructures (VDI), or need to learn more, Alastair talks about some great resources. Check out Alastair's site www.demitasse.co.nz for more information about the AutoLab, VMware training and education, along with the vBrownbag podcasts that are also available on iTunes, as well as the APAC Virtualisation podcasts.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Alastair and myself.

StorageIO podcast


Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts and podcasts, along with other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode vBrownbags, vForums and VMware vTraining with Alastair Cooke.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Industry trends and perspectives: Chatting with Karl Chen at SNW 2012

This is the second (here is the first, SNW 2012 Waynes World) in a series of StorageIO industry trends and perspective audio blogs and podcasts about Storage Networking World (SNW) Fall 2012 in Santa Clara, California.

StorageIO industry trends cloud, virtualization and big data

Given how conference conversations tend to occur in the hallways, lobbies and bar areas of venues, what better place to have candid conversations with people from throughout the industry, some you know, some you will get to know better.

In this episode, I’m joined by my co-host Bruce Rave aka Bruce Ravid of Ravid & Associates as we catch up and visit with Chief Marketing Officer (CMO) of Starboard Storage Systems Karl Chen in the Santa Clara Hyatt (event venue) lobby bar area.

Click here (right-click to download the MP3 file) or on the microphone image to listen to the conversation with Karl and Bruce. Our conversation covers SNW, VMworld, America's Cup yacht racing, storage technology and networking with people during these events.

StorageIO podcast


Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts and podcasts from SNW and other upcoming events.

Enjoy catching up with Karl Chen in this podcast from the Fall SNW 2012.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Part IV: PureSystems, something old, something new, something from big blue

This is the fourth in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here, and the next post here.

So what does this mean for IBM Business Partners (BPs) and ISVs?
What could very well differentiate IBM PureSystems from those of other competitors is to take what their partner NetApp has done with FlexPods, combining third-party applications from Microsoft and SAP among others, and take it to the next level. Similar to what helped make EMC Centera a success (or at least sell a lot of them) was the inclusion and leveraging of third-party ISVs and BPs to add value. Compared to other vendors with object based or content accessible storage (CAS) or online archive platforms that focused on technology features, functions, speeds and feeds, EMC realized the key was getting ISVs on board so that BPs and their own direct sales force could sell the solution.

With PureSystems, IBM is revisiting what they have done in the past, which is to offer bundled solutions providing incentives for ISVs to support and BPs to sell the IBM brand solution. EMC took an early step by including VMware with their Vblock, combining server, storage, networking and software, with NetApp taking the next step, adding SAP, Microsoft and other applications. Dell, HP, Oracle and others are following suit, so it only makes sense that IBM returns to its roots, leveraging its DNA to reach out and get their ISVs, who are now, have been in the past, or are new opportunities, to be on board.

IBM is throwing its resources at this, including their innovation centers around the world where business partners can get the knowledge and technical support they need. In other words, workshops or seminars on how to sell, deploy and set up these systems, application and customer testing or proof of concepts, and the other things one would expect from IBM for such an initiative. In addition to technology and sales training along with marketing support, IBM is making their financing capabilities available to help customers, as well as offering incentives to their business partners to simplify acquisitions.

So what buzzword bingo topics and themes did IBM address with this announcement?
IBM did a fantastic job of knocking the ball out of the park on the buzzword bingo front with this announcement, and deserves an atta boy or atta girl!

So what about how this will affect sales of BladeCenter or other systems?
If all IBM and their BPs do is encroach on existing system sales, circling the wagons to protect the installed base, that would be one thing. However, if IBM and their BPs can use the new packaging and model approach to reestablish customers and partnerships, or to open and expand into new adjacent markets, then the net difference should be more BladeCenters (excuse me, PureFlex systems) being sold.

So what will this cost?
IBM is citing entry PureSystems Express models starting at around $100,000 USD for base systems, with others starting at around $200,000 and $300,000, expandable into larger configurations and budgets. Note that like airlines that advertise a low airfare and then charge extra for peanuts, drinks, extra bag space and changes to reservations, look at these and related systems not just at the starting price, but also at expansion costs over different time periods. Contact IBM, your BP or ISV to find out what one of these systems will do for you, and cost you.
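To make the airline analogy concrete, here is a minimal back-of-the-envelope sketch in Python. All of the maintenance and expansion figures are hypothetical placeholders, not actual IBM pricing; the point is only to show how a lower entry price plus growth can converge on, or overtake, a larger base configuration over a few years.

```python
# Hypothetical multi-year cost comparison: entry price vs. cost over time.
# All figures are illustrative placeholders, not actual IBM PureSystems pricing.

def total_cost(entry_price, annual_maintenance, years, expansions):
    """Entry price plus yearly maintenance plus expansion purchases.

    expansions: list of costs for capacity added during the period.
    """
    return entry_price + annual_maintenance * years + sum(expansions)

# Entry Express model: low starting price, more expansion purchases later.
express = total_cost(100_000, 12_000, years=3, expansions=[40_000, 40_000, 40_000])

# Larger base configuration: higher starting price, little expansion needed.
larger = total_cost(300_000, 20_000, years=3, expansions=[20_000])

print(f"Express model, 3-year cost: ${express:,}")   # $256,000
print(f"Larger config, 3-year cost: ${larger:,}")    # $380,000
```

Swap in your own quotes and growth assumptions; the gap narrows or flips depending on how much expansion you actually end up buying, which is precisely why the starting price alone is a poor comparison point.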

So what about VARs and IBM business partners (BPs)?
This could be a boon for those BPs and ISVs that had previously sold their software solutions bundled with IBM hardware platforms and were being challenged by other converged solution stacks, or were being forced to unbundle. It will also allow those business partners to compete on par with other converged solutions, or to continue selling the pieces they are familiar with, albeit under a new umbrella. Of course, pricing will be a focus and concern for some, who will want to see what added value exists vs. acquiring the various components separately. This also means that IBM will have to make incentives available for their partners to make a living, while also allowing their customers to afford solutions and maximize their return on innovation (the new ROI) and enablement.

Click here to view the next post in this series. Ok, nuff said for now.

Here are some links to learn more:
Various IBM Redbooks and related content
The blame game: Does cloud storage result in data loss?
What do you need when it's time to buy a new server?
2012 industry trends perspectives and commentary (predictions)
Convergence: People, Processes, Policies and Products
Buzzword Bingo and Acronym Update V2.011
The function of XaaS(X) – Pick a letter
Hard product vs. soft product
Part I: PureSystems, something old, something new, something from big blue
Part II: PureSystems, something old, something new, something from big blue
Part III: PureSystems, something old, something new, something from big blue
Part IV: PureSystems, something old, something new, something from big blue
Part V: PureSystems, something old, something new, something from big blue
Cloud and Virtual Data Storage Networking

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Part V: PureSystems, something old, something new, something from big blue

This is the fifth in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here.

So what about vendor or technology lock in?
So who is responsible for vendor or technology lock in? When I was working in IT organizations (e.g. what vendors call the customer), the thinking was that vendors are responsible for lock in. Later, when I worked for different vendors (manufacturers and VARs), the thinking was that lock in is caused by the competition. More recently, I'm of the mindset that vendor lock in is a shared responsibility. I'm sure some marketing whiz or sales type will be happy to explain the subtle differences of how their solution does not cause lock in.

Vendor lock in can be a shared responsibility. Generally speaking, lock in, stickiness and account control are essentially the same, or at least strive for similar results. For example, vendor lock in to some has a negative stigma, whereas vendor stickiness may be a newer term, perhaps even sounding cool, and thus not a concern. Remember the Mary Poppins song, a spoonful of sugar makes the medicine go down? In other words, sometimes using a different term such as sticky instead of vendor lock in helps make the situation taste better.

So what should you do?
Take a closer look if you are considering converged infrastructures, cloud or data centers in a box, or turnkey application and information services deployment platforms. Likewise, if you are looking at specific offerings such as Cisco UCS, Dell vStart, EMC Vblock (or via VCE), HP, NetApp FlexPod or Oracle (ExaLogic, ExaData, etc.) among others, also check out the IBM PureSystems (PureFlex and PureApplication). Compare and contrast these converged solutions with your traditional procurement and deployment modes, including the cost of acquiring hardware and software, ongoing maintenance or service fees, and the value or benefit of bundled tools. There may be a higher cost for converged systems in some scenarios; however, compare based on the value and benefit derived vs. doing the integration yourself.
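As a rough way to frame that do-it-yourself comparison, here is a small Python sketch along the same lines as the earlier one. Every number, including the integration-hours estimate, is a hypothetical placeholder to be replaced with your own quotes and labor rates:

```python
# Hypothetical converged bundle vs. do-it-yourself (DIY) cost comparison.
# All numbers are illustrative placeholders; plug in your own quotes.

def diy_cost(hardware, software, support_contracts, integration_hours, hourly_rate):
    """Cost of buying components separately and integrating them yourself."""
    return hardware + software + sum(support_contracts) + integration_hours * hourly_rate

def converged_cost(bundle_price, bundled_support):
    """Cost of a pre-integrated bundle with a single support contract."""
    return bundle_price + bundled_support

diy = diy_cost(hardware=120_000, software=60_000,
               support_contracts=[8_000, 6_000, 4_000],  # per-component contracts
               integration_hours=400, hourly_rate=150)   # your staff or a VAR
bundle = converged_cost(bundle_price=220_000, bundled_support=18_000)

print(f"DIY total:       ${diy:,}")       # $258,000 with these placeholders
print(f"Converged total: ${bundle:,}")    # $238,000 with these placeholders
```

Whether the converged number comes out lower depends almost entirely on what you assume for integration labor and support consolidation, which is the homework the paragraph above is asking you to do.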

Compare and contrast how converged solutions enable, however also consider what constraints exist in terms of flexibility to reconfigure or make other changes in the future. For example, as part of integration, does a solution take a lowest common denominator approach to software and firmware revisions for compatibility, one that may lag behind what you could apply to standalone components? Also, compare and contrast various reference architectures with different solution bundles or packages.

Most importantly, compare and evaluate the solutions on their ability to meet and exceed your base requirements while adding value and enabling return on innovation, while also being cost-effective. Do not be scared of these bundled solutions; however, do your homework to make informed decisions, including overcoming any concerns about lock in or future costs and fees. While these types of solutions are cool or interesting from a technology perspective and can streamline acquisition and deployment, make sure that there is a business benefit being addressed, as well as enablement of new capabilities.

So what does this all mean?
Congratulations to IBM on PureSystems, leveraging their DNA and roots to bundle what had been unbundled before clouds and stacks were popular and trendy. IBM has done a good job of talking vision and strategy along the lines of converged and dynamic, elastic and smart, clouds and other themes for the past couple of years, while selling the pieces as parts of solutions, ala carte, or packaged by their ISVs and business partners.

What will be interesting to see is whether BladeCenter customers shift to buying PureFlex, which should be an immediate boost providing proof points of adoption, while essentially upselling what was previously available. However, more interesting will be to see whether net new customers and footprints are sold, as opposed to simply selling a newer and enhanced version of previous components.

In other words, will IBM be able to keep up their focus and execution where they have sold the previously available components, while also holding onto current ISV and BP footprint sales, and perhaps enabling those partners to recapture some hardware and solution sales that had been unbundled (e.g. ISV software sold separately from IBM platforms) and move into new adjacent markets?

Here are some links to learn more:
Various IBM Redbooks and related content
The blame game: Does cloud storage result in data loss?
What do you need when it's time to buy a new server?
2012 industry trends perspectives and commentary (predictions)
Convergence: People, Processes, Policies and Products
Buzzword Bingo and Acronym Update V2.011
The function of XaaS(X) – Pick a letter
Hard product vs. soft product
Part I: PureSystems, something old, something new, something from big blue
Part II: PureSystems, something old, something new, something from big blue
Part III: PureSystems, something old, something new, something from big blue
Part IV: PureSystems, something old, something new, something from big blue
Part V: PureSystems, something old, something new, something from big blue
Cloud and Virtual Data Storage Networking

Ok, so what is next? Let's see how this unfolds for IBM and their partners.

Nuff said for now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved