Data Infrastructure Server Storage I/O Tradecraft Trends


Updated 1/17/2018

Data infrastructure trends span server, storage, I/O, and networking, while the associated tradecraft is your skills, experiences, and insight, as well as tricks of the trade, profession, and job function (read more about what a data infrastructure is here).

This is the second of a two-part series exploring data infrastructure along with server storage I/O and related tradecraft. Read part one of this series here.

Data Infrastructures
Data Infrastructure and IT Infrastructure Layers

As a refresher from part one, data infrastructure encompasses servers, storage, I/O and networking along with associated hardware, software, services and management tasks including data protection among others. Tradecraft is knowing about tools, technologies, and trends in your primary domain as well as adjacent focus areas. However, tradecraft is also about knowing how and when to use different technologies, tools with various techniques to address different scenarios.

Tradecraft Trends
Trends involving tradecraft include capturing existing experiences and skills from those who are about to retire or simply move on to something else, as well as learning for those new to IT or servers, storage, I/O, and data infrastructure hardware, software, and services. This means being able to find a balance of old and new tools, techniques, and technologies, including using things in new ways for different situations.

Part of expanding your tradecraft skill set is knowing when to use different tools, techniques, and technologies from proprietary and closed to open solutions, from tightly integrated to loosely integrated, to bundled and converged, or to a la carte or unbundled components, with do-it-yourself (DIY) integration.

Tradecraft also means being able to balance when to make a change of technology, tool, or technique for the sake of change vs. clinging to something comfortable or known, vs. leveraging old and new in new ways while enabling change without disrupting the data infrastructure environment or users of its services.

A couple of other trends include the convergence of people and positions within organizations that may have been in different silos or focus areas in the past. One example is the rise of Development Operations (also known as DevOps), where instead of separate development, administration, and operations areas, they are a combined entity. This might be déjà vu for some of you who grew up and gained your tradecraft in similar types of organizations decades ago; for others, it may be something new.

Regarding fundamental tradecraft skills, if you are a hardware person it is wise to learn software; if you are a software person, it is advisable to acquire some hardware experience. Also, don’t be afraid to say “I do not know” or “it depends” when asked a question. This also means learning how information technology supports the needs of the business, as well as learning the technology the business uses.

Put another way, in addition to learning server storage I/O hardware and software tradecraft, also learn the basic tradecraft of the business your information systems are supporting. After all, the fundamental role of IT is to protect, preserve, and serve information that enables the company or organization; no business exists just to support IT.

Data Infrastructure Tool Box

How to develop tradecraft?
There are many ways, including reading this book along with the companion websites as well as other books, attending seminars and webinars, participating in forums and user groups, and having a test lab in which to learn and try things. Also, find a mentor you can learn from to help capture some of his or her tradecraft, and if you are experienced, become a mentor to help others develop their tradecraft.

Toolbox tips, reminders, and recommendations:

  • Create a virtual, software-defined, and physical toolbox.
  • Include tip sheets, notes, hints, tricks, and shortcuts.
  • Leverage books, blogs, websites, tutorials, and related information.
  • Implement a lab or sandbox to try things out.
  • Do some proof of concepts (POC) and gain more experience.

Tradecraft Tips
Get some hands-on, behind-the-wheel time with various technologies to gain insight, perspective, and appreciation of what others are doing, as well as what is needed to make informed decisions about other areas. This also means learning by watching demos, trying out software, tools, and services, or using other ways to understand a solution. Knowing about tools and technologies is important; however, so too is knowing how (techniques), when, where, and why to use a given tool, by itself or together with multiple other tools.
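As one small, concrete example of the hands-on experimentation described above, the following Python sketch times buffered sequential writes to a temporary file to estimate write throughput. The function name and parameters are illustrative, not from any specific tool; a real benchmark would control for caching, use direct I/O, and average multiple runs.

```python
import os
import tempfile
import time

def measure_write_throughput(size_mb: int = 64, block_kb: int = 128) -> float:
    """Write size_mb of data in block_kb chunks to a temp file and return
    the observed throughput in MB/s. Illustrative lab exercise only."""
    block = os.urandom(block_kb * 1024)          # one block of random data
    blocks = (size_mb * 1024) // block_kb        # how many blocks to write
    with tempfile.NamedTemporaryFile(delete=True) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # push data to the device, not just the page cache
        elapsed = time.perf_counter() - start
    return size_mb / elapsed

if __name__ == "__main__":
    print(f"~{measure_write_throughput():.1f} MB/s sequential write")
```

Running a simple exercise like this in a lab or sandbox, then varying block size or adding random I/O, is one way to build the kind of behind-the-wheel experience and perspective discussed here.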

Additional tips and considerations include:

  • Expand your social and technical network into adjacent areas.
  • Get involved in user groups, forums, and other venues to learn and give back.
  • Listen, learn, and comprehend vs. only memorizing to pass a test.
  • Find a mentor to help guide you, and become a mentor to help others.
  • Collaborate, share, respect and be respected; the accolades will follow.
  • Evolve from a focus on certificates or credentials to an expansion of experiences.
  • Connect with others to expand your network.

Where to learn more

Continue reading more and expanding your tradecraft experiences with the following among other resources:

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this means

Remember that tradecraft is skills, experiences, tricks, and techniques along with knowing what as well as how to use various related tools as part of what it is that you are doing. Your data infrastructure tradecraft is (or should be):

  • Essential skills and experiences spanning different technologies and focus areas
  • Knowing various techniques to use new and old things in new as well as hybrid ways
  • Expanding awareness into adjacent areas around your current focus or interest areas
  • Leveraging comprehension, understanding application of what you know
  • Evolving with new knowledge, experiences, and insight about tools and techniques
  • Hardware, software, services, processes, practices, and management
  • From legacy to software-defined, cloud, virtual, and containers

Part of server storage I/O data infrastructure tradecraft is understanding what tools to use when, where, and why, not to mention knowing how to adapt with those tools, find new ones, or create your own.

Remember, if all you have is a hammer, everything starts to look like a nail. On the other hand, if you have more tools than you know what to do with, or know how to use, perhaps you need fewer tools, along with learning to use them well by enhancing your skill set and tradecraft.

In between the known (data infrastructure server, storage, and I/O networking, converged infrastructure (CI), hyper-converged infrastructure (HCI), Docker and other containers, cloud, hardware, and software-defined) and the unknown is your tradecraft. How narrow the gap is between the known and the unknown, and how well you apply your experience, reflects the depth and diversity of your tradecraft.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

What Does Converged Infrastructure CI Hyperconverged HCI Mean to Storage I/O?

What Does CI and HCI Mean to Storage I/O?

server storage I/O trends

Updated 1/17/2018

Converged Infrastructure (CI), Hyperconverged Infrastructure (HCI) along with Cluster or Cloud in Box (CiB) are popular trend topics that have gained both industry and customer adoption as part of data infrastructures. Data infrastructures exist to support business, cloud and information technology (IT) applications, among others, that transform data into information or services. The fundamental role of legacy and software defined data infrastructures (SDDI) is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective.

Software Defined Data Infrastructure overview

Business, IT Information, Data and other Infrastructures

Put another way, data infrastructures exist to protect, preserve, process, move, secure and serve data as well as their applications for information services delivery. Technologies that make up data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking along with people, processes, policies along with various tools spanning legacy, software-defined virtual, containers and cloud.

As part of data infrastructures, CI, CiB and HCI enable simplified deployment of resources (servers, storage, I/O networking, hardware, software) across different environments. What do these various approaches (CI, HCI, CiB) mean for a hyperconverged (and converged) storage environment? What are the key concerns and considerations related specifically to storage? Most importantly, how do you know that you’re asking the right questions in order to get to the right answers?

Join me on March 15 at 10:00 AM PT for a live (free) webinar organized by the Storage Network Industry Association (SNIA) Ethernet Storage Forum (ESF). In this webinar (What Does Hyperconverged Mean to Storage) I will be joined by SNIA ESF chair John Kim of Mellanox to discuss moving beyond the hype to prepare, plan and make decisions for deploying CI, CiB and HCI.

Some of the server, storage I/O and related topics we will be discussing during the webcast include:

  • What are the storage considerations for CI, CiB and HCI
  • Fast applications and fast servers need fast server storage I/O
  • Fast NVM storage including NVMe, flash and SSD
  • Networking and server storage I/O considerations
  • How to avoid aggravation-causing aggregation (bottlenecks)
  • Aggregated vs. disaggregated vs. hybrid converged
  • Planning, comparing, benchmarking and decision-making
  • Data protection, management and east-west I/O traffic
  • Application and server I/O north-south traffic

Where To Learn More

  • SNIA ESF organized webinar on BrightTalk March 15, 2017
  • StorageIO.com (events, news, tips, resources) and StorageIOblog.com
  • Cloud and Virtual Data Storage Networking (CRC)
  • Software-Defined Data Infrastructure Essentials (CRC)
  • Data Infrastructure Primer and Overview (Its Whats Inside The Data Center)
  • Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    For many environments some form of converged, disaggregated, aggregated or hyper-converged solution or approach will be part of their data infrastructures. Join the SNIA ESF folks and me on March 15, 2017 (bring your questions) to discuss CI and HCI storage I/O topics, trends, technologies and themes.

    Ok, nuff said, for now.

    Gs


    Updated Software Defined Data Infrastructure Webinars and Fall 2016 events

    Software Defined Data Infrastructure Webinars and Fall 2016 events

    server storage I/O trends

    Here are the updated Server StorageIO fall 2016 webinar and event activities covering software defined data center, data infrastructure, virtual, cloud, containers, converged, hyper-converged server, storage, I/O network, performance and data protection, among other topics.

    December 7, 2016 – Webinar 11AM PT – BrightTalk
    Hyper-Converged Infrastructure Decision Making

    Hyper-Converged Infrastructure, HCI and CI Decision Making

    Are Converged Infrastructures (CI), Hyper-Converged Infrastructures (HCI), Cluster in Box or Cloud in Box (CiB) solutions for you? The answer is: it depends on your needs, requirements and applications, among other criteria. In addition, are you focused on a particular technology solution or architecture approach, or looking for something that adapts to your needs? Join us in this discussion exploring your options for different scenarios as we look beyond the hype, including at the next wave of hyper-scale converged, along with applicable decision-making criteria. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – What are your application and environment needs along with other objectives
    – Explore various approaches for hyper-small and hyper-large environments
    – What are you converging, hardware, hypervisors, management or something else?
    – Does HCI mean hyper-vendor-lock-in, if so, is that a bad thing?
    – When, where, why and how to use different scenarios

    November 29-30, 2016 (New) – Converged & Hyper-Converged Decision Making
    Is Converged Infrastructure Right For You?
    Workshop Seminar – Nijkerk The Netherlands

    Converged and server storage I/O data infrastructure trends
    Agenda and topics to be covered include:

    • When you should decide to evaluate CI/HCI vs. a traditional approach
    • What the decision and evaluation criteria are for apples-to-apples vs. apples-to-pears comparisons
    • What are the costs, benefits, and caveats of the different approaches
    • How different applications such as VDI, VSI or databases have different needs
    • What are the network, storage, software license and training cost implications
    • Different comparison criteria for smaller environments and remote offices vs. larger enterprises
    • How will you protect and secure a CI or HCI environment (HA, BC, BR, DR, backup)
    • What is the risk and benefit of startups and companies with limited portfolios vs. big vendors
    • Do-it-yourself (DIY) vs. turnkey software vs. bundled "tin-wrapped" software solutions
    • We will also look at associated trends including software-defined, NVM/SSD, NVMe, VMware, Microsoft, KVM, Citrix/Xen, Docker, OpenStack among others.

    Organized by:
    Brouwer Storage Consultancy

    November 28, 2016 (New) – Server Storage I/O Fundamental Trends V2.1116
    What's new, what's the buzz, what you need to know about, and who's doing what
    Workshop Seminar – Nijkerk The Netherlands

    Converged and server storage I/O data infrastructure trends
    Agenda and topics that will be covered include:

    • Who’s doing what, who are the new emerging vendors, solutions and technologies to watch
    • Non-Volatile Memory (NVM), flash solid state device (SSD), Storage Class Memory (SCM)
    • Networking with your servers and storage including NVMe, NVMeoF and RoCE
    • Cloud, Object and Bulk storage for data protection, archiving, near-line, scale-out
    • Data protection and software defined storage management (backup, BC, BR, DR, archive)
    • Microsoft Windows Server 2016, Nano, S2D and Hyper-V
    • VMware, OpenStack, Ceph, Docker and Containers, CI and HCI
    • EMC is gone, now there is Dell EMC and what that means
    • Various vendors and solutions from legacy to new and emerging
    • Recommendations, usage or deployment scenarios and tips
    • Some examples of who's doing what include AWS, Brocade, Cisco, Dell EMC, Enmotus, Fujitsu, Google, HDS, HP, Huawei, IBM, Intel, Lenovo, Mellanox, Micron, Microsoft, NetApp, Nutanix, Oracle, Pure, Quantum, Qumulo, Reduxio, Rubrik, Samsung, SanDisk, Seagate, SimpliVity, Tintri, Veeam, Veritas, VMware and WD, among others.

    Organized by:
    Brouwer Storage Consultancy

    November 23, 2016 – Webinar 10AM PT BrightTalk
    BCDR and Cloud Backup Software Defined Data Infrastructures (SDDI) and Data Protection

    BC DR Cloud Backup and Data Protection

    The answer is BCDR and Cloud Backup; however, what was the question? Besides how to protect, preserve and secure your data, applications and data infrastructures against various threats and risks, what are some other common questions? For example, how to modernize, rethink, re-architect, and use new and old things in new ways. These and other topics, techniques, trends and tools have a common theme of BCDR and Cloud Backup. Join us in this discussion exploring your options for protecting data, applications and your data infrastructures spanning legacy, software-defined virtual and cloud environments. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – Various cloud storage options to meet different application PACE needs
    – Do clouds need to be backed-up or protected?
    – How to leverage clouds for various data protection objectives
    – When, where, why and how to use different scenarios

    November 23, 2016 – Webinar 9AM PT – BrightTalk
    Cloud Storage – Hybrid and Software Defined Data Infrastructures (SDDI)

    Cloud Storage Decision Making

    You have been told, or have determined, that you need (or want) to use cloud storage; ok, now what? What type of cloud storage do you need or want, or do you simply want cloud storage? What are your options as well as application requirements, including Performance, Availability, Capacity and Economics (PACE) along with access or interfaces? Where are your applications, and where will they be located? What are your objectives for using cloud storage, or is it simply that you have heard or been told it's cheaper? Join us in this discussion exploring your options and considerations for cloud storage decision-making. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – Various cloud storage options to meet different application PACE needs
    – Storage for primary, secondary, performance, availability, capacity, backup, archiving
    – Public, private and hybrid cloud storage options from block, file, object to application service
    – When, where, why and how to use cloud storage for different scenarios
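The PACE (Performance, Availability, Capacity, Economics) requirements mentioned in the topics above can be captured in a simple structure for comparing storage options against application needs. This Python sketch is purely illustrative; the field names, units, and sample numbers are assumptions, not a standard schema or real pricing.

```python
from dataclasses import dataclass

@dataclass
class Pace:
    """Application PACE attributes: Performance (IOPS), Availability
    (percent), Capacity (GB), Economics (cost per GB-month).
    Illustrative fields and units, not a standard schema."""
    perf_iops: int
    avail_pct: float
    capacity_gb: int
    cost_per_gb_month: float

def meets(offering: Pace, needs: Pace) -> bool:
    """True if a storage offering satisfies an application's PACE needs."""
    return (offering.perf_iops >= needs.perf_iops
            and offering.avail_pct >= needs.avail_pct
            and offering.capacity_gb >= needs.capacity_gb
            and offering.cost_per_gb_month <= needs.cost_per_gb_month)

if __name__ == "__main__":
    # Hypothetical numbers for a database workload and two storage tiers.
    db_needs = Pace(5000, 99.9, 2000, 0.10)
    object_tier = Pace(500, 99.99, 100000, 0.02)   # cheap but slow
    block_tier = Pace(20000, 99.95, 16000, 0.08)   # fast enough, in budget
    print("object tier fits:", meets(object_tier, db_needs))
    print("block tier fits:", meets(block_tier, db_needs))
```

The point of even a toy model like this is that "cheaper" on one PACE dimension (economics) can fail on another (performance), which is why decision-making starts from application needs rather than from a single attribute.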

    November 22, 2016 – Webinar 10AM PT – BrightTalk
    Cloud Infrastructure Hybrid and Software Defined Data Infrastructures (SDDI)

    Cloud Infrastructure and Hybrid Software Defined

    At the core of cloud (public, private, hybrid) next generation data centers are software defined data infrastructures that exist to protect, preserve and serve applications, data and their resulting information services. Software defined data infrastructure core components include hardware, software, servers and storage configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for cloud services (and underlying data infrastructures). Join us in this session to discuss trends, technologies, tools, techniques and service options for cloud infrastructures. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers and clouds
    – Various types of clouds along with cloud services that determine how resources get defined
    – When, where, why and how to use cloud Infrastructures along with associated resources

    November 15, 2016 (New) – 11AM PT Webinar – Redmond Magazine and Solarwinds
    The O.A.R. of Virtualization Scaling
    A journey of optimization, automation, and reporting

    Your journey to a flexible, scalable and secure IT universe begins now. Join Microsoft MVP and VMware vExpert (vSAN) Greg Schulz of Server StorageIO along with VMware vExpert, Cisco Champion and Head Geek of Virtualization and Cloud Practice Kong Yang of SolarWinds for an interactive discussion empowering you to become the master of your software defined and virtual data center. Topics will include:

    • Trust your instruments and automation; however, verify they are working properly
    • Insight into how your environment, as well as automation tools, are working
    • Leverage automation to handle recurring tasks so you can focus on more productive activities
    • Capture, retain and transfer knowledge and tradecraft experiences into automation policies
    • Automated system management is only as good as the policies and data they rely upon
    • Optimize via automation that relies on reporting for insight, awareness and analytics 

    November 3, 2016 (New) – Webinar 11AM PT – Redmond Magazine and
    Dell Software
    Tailor Your Backup Data Repositories to
    Fit Your Security and Management Needs

    Does data protection storage have you working overtime to take care of it? Do you have the flexibility to protect, preserve, secure and serve different workgroups or customers in a shared environment? Is your environment looking to expand with new applications and remote offices, yet your data protection is slowing you down? 

    In this webinar we will look at current and emerging trends along with issues including how different threat risk challenges impact your evolving environment, as well as opportunities to address them. It’s time to deploy technology that works for you and your environment instead of you working for the solution. 

    Attend and learn about:

    • Data protection trends, issues, regulatory compliance, challenges and opportunities
    • How to utilize purpose built appliances to protect and defend your systems, applications and data from various threat risks
    • Importance of timely insight and situational awareness into your data protection infrastructure
    • Protecting centralized and distributed remote office branch offices (ROBO) workgroups
    • What you can do today to optimize your environment

    October 27, 2016 (New) – Webinar 10AM PT – Virtual Instruments
    The Value of Infrastructure Insight

    This webinar looks at the value of data center infrastructure insight as both a technology and a business productivity enabler. Besides productivity, having insight into how data infrastructure resources (servers, storage, networks, system software) are used enables informed analysis, troubleshooting, planning, forecasting and cost-effective decision-making. In other words, data center infrastructure insight, based on infrastructure performance analytics, enables you to avoid flying blind, giving you situational awareness for proactive Information Technology (IT) management. Your return on innovation is increased, and leveraging insight and awareness along with metrics that matter drives return on investment (ROI) along with enhanced service delivery.
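As a trivial illustration of the kind of raw data that infrastructure insight builds on, the following Python sketch reports per-filesystem capacity usage using only the standard library. It is a sketch of the idea, not a monitoring product; real insight tools layer performance metrics, trending, and alerting on top of numbers like these.

```python
import shutil

def capacity_report(paths):
    """Return a per-path capacity summary: total and used space in GB,
    plus percent used. Illustrative only; the function name and output
    shape are assumptions, not a real tool's API."""
    report = {}
    for p in paths:
        usage = shutil.disk_usage(p)  # (total, used, free) in bytes
        report[p] = {
            "total_gb": usage.total / 1e9,
            "used_gb": usage.used / 1e9,
            "pct_used": 100.0 * usage.used / usage.total,
        }
    return report

if __name__ == "__main__":
    for path, stats in capacity_report(["/"]).items():
        print(f"{path}: {stats['pct_used']:.1f}% of {stats['total_gb']:.0f} GB used")
```

Raw numbers such as these only become insight when collected over time and correlated with application behavior, which is the situational awareness theme of the webinar described above.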

    October 20, 2016 – Webinar 9AM PT – BrightTalk
    Next-Gen Data Centers Software Defined Data Infrastructures (SDDI) including Servers, Storage and Virtualization

    Cloud Storage Decision Making

    At the core of next generation data centers are software defined data infrastructures that enable, protect, preserve and serve applications, data and their resulting information services. Software defined data infrastructure core components include hardware, software, servers and storage configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for virtual servers and storage. Join us in this session to discuss trends, technologies, tools, techniques and services around storage and virtualization for today, tomorrow, and the years to come. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers
    – Server and Storage Virtualization better together, with and without CI/HCI
    – Many different facets (types) of Server virtualization and virtual storage
    – When, where, why and how to use storage virtualization and virtual storage

    September 20, 2016 – Webinar 8AM PT – BrightTalk
    Software Defined Data Infrastructures (SDDI) Enabling Software Defined Data Centers – Part of Software-Defined Storage summit

    Cloud Storage Decision Making

    Data Infrastructures exist to support applications and their underlying resource needs. Software-Defined Infrastructures (SDI) are what enable Software-Defined Data Centers, and at the heart of an SDI is storage that is software-defined. This spans cloud, virtual and physical storage and is a focal point today. Join us in this session to discuss trends, technologies, tools, techniques and services around SDI and SDDC today, tomorrow, and in the years to come.

    September 13, 2016 – Webinar 11AM PT – Redmond Magazine and
    Dell Software
    Windows Server 2016 and Active Directory
    What's New and How to Plan for Migration

    Windows Server 2016 is expected to GA this fall and is a modernized version of the Microsoft operating system that includes new capabilities such as Active Directory (AD) enhancements. AD is critical to organizational operations, providing control and secure access to data, networks, servers, storage and more from physical, virtual and cloud (public and hybrid) environments. But over time, organizations along with their associated IT infrastructures have evolved due to mergers, acquisitions, restructuring and general growth. As a result, yesterday's AD deployments may look like they did in the past while using new technology (e.g., in old ways).

    Now is the time to start planning how you will optimize your AD environment using new tools and technologies, such as those in Windows Server 2016, and AD in new ways. Optimizing AD means having a new design, performing cleanup and restructuring prior to migration vs. simply moving what you have. Join us for this interactive webinar to begin planning your journey to Windows Server 2016 and a new optimized AD deployment that is flexible, scalable and elastic, and enables resilient infrastructures. You will learn:

    • What’s new in Windows Server 2016 and how it impacts your AD
    • Why an optimized AD is critical for IT environments moving forward
    • How to gain insight into your current AD environment
    • AD restructuring planning considerations

    September 8, 2016 – Webinar 11AM PT (Watch on Demand) – Redmond Magazine, Acronis and Unitrends
    Data Protection for Modern Microsoft Environments

    Your organization’s business depends on modern Microsoft® environments — Microsoft Azure and new versions of Windows Server 2016, Microsoft Hyper-V with RCT, and business applications — and you need a data protection solution that keeps pace with Microsoft technologies. If you lose mission-critical data, it can cost you $100,000 or more for a single hour of downtime. Join our webinar and learn how different data protection solutions can protect your Microsoft environment, whether you store data on company premises, at remote locations, in private and public clouds, and on mobile devices.

    Where To Learn More

    What This All Means

    It's fall, back-to-school and learning time; join me for these and other upcoming event activities.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Back To Software Defined Data Infrastructure School, Webinar and Fall 2016 events

    Software Defined Data Infrastructure Webinars and Fall 2016 events

    server storage I/O trends

    It's September and that means back-to-school time, and not just for the kids. Here are the preliminary Server StorageIO fall 2016 back-to-school webinar and event activities covering software defined data center, data infrastructure, virtual, cloud, containers, converged, hyper-converged server, storage, I/O network, performance and data protection, among other topics.

    December 7, 2016 – Webinar 11AM PT – BrightTalk
    Hyper-Converged Infrastructure Decision Making

    Are Converged Infrastructures (CI), Hyper-Converged Infrastructures (HCI), Cluster in Box or Cloud in Box (CiB) solutions for you? The answer is: it depends on your needs, requirements and applications, among other criteria. In addition, are you focused on a particular technology solution or architecture approach, or looking for something that adapts to your needs? Join us in this discussion exploring your options for different scenarios as we look beyond the hype, including at the next wave of hyper-scale converged, along with applicable decision-making criteria. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – What are your application and environment needs along with other objectives
    – Explore various approaches for hyper-small and hyper-large environments
    – What are you converging, hardware, hypervisors, management or something else?
    – Does HCI mean hyper-vendor-lock-in, if so, is that a bad thing?
    – When, where, why and how to use different scenarios

    November 23, 2016 – Webinar 10AM PT BrightTalk
    BCDR and Cloud Backup Software Defined Data Infrastructures (SDDI) and Data Protection

    The answer is BCDR and Cloud Backup; however, what was the question? Besides how to protect, preserve and secure your data, applications and data infrastructures against various threats and risks, what are some other common questions? For example, how to modernize, rethink, re-architect, and use new and old things in new ways. These and other topics, techniques, trends and tools have a common theme of BCDR and Cloud Backup. Join us in this discussion exploring your options for protecting data, applications and your data infrastructures spanning legacy, software-defined virtual and cloud environments. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – Various cloud storage options to meet different application PACE needs
    – Do clouds need to be backed-up or protected?
    – How to leverage clouds for various data protection objectives
    – When, where, why and how to use different scenarios

    November 23, 2016 – Webinar 9AM PT – BrightTalk
    Cloud Storage – Hybrid and Software Defined Data Infrastructures (SDDI)

    You have been told, or have determined, that you need (or want) to use cloud storage. Ok, now what? What type of cloud storage do you need or want? What are your options as well as application requirements, including Performance, Availability, Capacity and Economics (PACE), along with access or interfaces? Where are your applications, and where will they be located? What are your objectives for using cloud storage, or is it simply that you have heard, or been told, that it is cheaper? Join us in this discussion exploring your options and considerations for cloud storage decision-making. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – Various cloud storage options to meet different application PACE needs
    – Storage for primary, secondary, performance, availability, capacity, backup, archiving
    – Public, private and hybrid cloud storage options from block, file, object to application service
    – When, where, why and how to use cloud storage for different scenarios

    November 22, 2016 – Webinar 10AM PT – BrightTalk
    Cloud Infrastructure Hybrid and Software Defined Data Infrastructures (SDDI)

    At the core of clouds (public, private, hybrid) and next-generation data centers are software-defined data infrastructures that exist to protect, preserve and serve applications, data and their resulting information services. Software-defined data infrastructure core components include hardware and software, servers and storage, configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for cloud services (and their underlying data infrastructures). Join us in this session to discuss trends, technologies, tools, techniques and service options for cloud infrastructures. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers and clouds
    – Various types of clouds along with cloud services that determine how resources get defined
    – When, where, why and how to use cloud Infrastructures along with associated resources

    October 27, 2016 – Webinar 10AM PT – Virtual Instruments
    The Value of Infrastructure Insight

    This webinar looks at the value of data center infrastructure insight both as a technology as well as a business productivity enabler. Besides productivity, having insight into how data infrastructure resources (servers, storage, networks, system software) are used, enables informed analysis, troubleshooting, planning, forecasting as well as cost-effective decision-making. In other words, data center infrastructure insight, based on infrastructure performance analytics, enables you to avoid flying blind, having situational awareness for proactive Information Technology (IT) management. Your return on innovation is increased, and leveraging insight awareness along with metrics that matter drives return on investment (ROI) along with enhanced service delivery.

    October 20, 2016 – Webinar 9AM PT – BrightTalk
    Next-Gen Data Centers Software Defined Data Infrastructures (SDDI) including Servers, Storage and Virtualizations

    At the core of next-generation data centers are software-defined data infrastructures that enable, protect, preserve and serve applications, data and their resulting information services. Software-defined data infrastructure core components include hardware and software, servers and storage, configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for virtual servers and storage. Join us in this session to discuss trends, technologies, tools, techniques and services around storage and virtualization for today, tomorrow, and in the years to come. Topics include:

    – Data Infrastructures exist to support applications and their underlying resource needs
    – Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers
    – Server and Storage Virtualization better together, with and without CI/HCI
    – Many different facets (types) of Server virtualization and virtual storage
    – When, where, why and how to use storage virtualization and virtual storage

    September 20, 2016 – Webinar 8AM PT – BrightTalk
    Software Defined Data Infrastructures (SDDI) Enabling Software Defined Data Centers – Part of Software-Defined Storage summit

    Data Infrastructures exist to support applications and their underlying resource needs. Software-Defined Infrastructures (SDI) are what enable Software-Defined Data Centers, and at the heart of an SDI is storage that is software-defined. This spans cloud, virtual and physical storage and is a focal point of IT today. Join us in this session to discuss trends, technologies, tools, techniques and services around SDI and SDDC – today, tomorrow, and in the years to come.

    September 13, 2016 – Webinar 11AM PT – Redmond Magazine and
    Dell Software
    Windows Server 2016 and Active Directory
    What's New and How to Plan for Migration

    Windows Server 2016 is expected to GA this fall as a modernized version of the Microsoft operating system that includes new capabilities such as Active Directory (AD) enhancements. AD is critical to organizational operations, providing control and secure access to data, networks, servers, storage and more across physical, virtual and cloud (public and hybrid) environments. But over time, organizations and their associated IT infrastructures have evolved due to mergers, acquisitions, restructuring and general growth. As a result, yesterday’s AD deployments may look like they did in the past while using new technology in old ways. Now is the time to start planning how you will optimize your AD environment using new tools and technologies, such as those in Windows Server 2016, and use AD in new ways. Optimizing AD means creating a new design, then performing cleanup and restructuring prior to migration vs. simply moving what you have. Join us for this interactive webinar to begin planning your journey to Windows Server 2016 and a new, optimized AD deployment that is flexible, scalable and elastic, and enables resilient infrastructures. You will learn:

    • What’s new in Windows Server 2016 and how it impacts your AD
    • Why an optimized AD is critical for IT environments moving forward
    • How to gain insight into your current AD environment
    • AD restructuring planning considerations

    September 8, 2016 – Webinar 11AM PT (Watch on Demand) – Redmond Magazine, Acronis and Unitrends
    Data Protection for Modern Microsoft Environments

    Your organization’s business depends on modern Microsoft® environments — Microsoft Azure and new versions of Windows Server 2016, Microsoft Hyper-V with RCT, and business applications — and you need a data protection solution that keeps pace with Microsoft technologies. If you lose mission-critical data, it can cost you $100,000 or more for a single hour of downtime. Join our webinar and learn how different data protection solutions can protect your Microsoft environment, whether you store data on company premises, at remote locations, in private and public clouds, and on mobile devices.

    Where To Learn More

    What This All Means

    It's back-to-school and learning time; join me at these and other upcoming events.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Intel and Micron unveil new 3D XPoint Non-Volatile Memory (NVM) for servers and storage

    3D XPoint NVM persistent memory PM storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018

    This is the first of a three-part series on the Intel and Micron 3D XPoint Non-Volatile Memory (NVM) for servers and storage announcement. Read Part II here and Part III here.

    In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used both for primary main memory (e.g. what’s in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and for persistent storage faster than today’s NAND flash-based solid state devices (SSDs), not to mention future hybrid usage scenarios. Note that while this announcement shares the term 3D, it is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).

    Twitter hash tag #3DXpoint

    The big picture, why this type of NVM technology is needed

    Server and Storage I/O trends

    • Memory is storage and storage is persistent memory
    • No such thing as a data or information recession; more data is being created, processed and stored
    • Increased demand is also driving density along with convergence across server storage I/O resources
    • Larger amounts of data needing to be processed faster (large amounts of little data and big fast data)
    • Fast applications need more and faster processors, memory along with I/O interfaces
    • The best server or storage I/O is the one you do not need to do
    • The second best I/O is one with least impact or overhead
    • Data needs to be close to processing, processing needs to be close to the data (locality of reference)
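The bullets above say the best I/O is the one you do not need to do, and that data should be close to processing (locality of reference). A minimal Python sketch (class and key names are illustrative, not from the announcement) of a read cache in front of a slower backing store shows both ideas at once:

```python
# Illustrative sketch: a read cache in front of a slow backing store,
# demonstrating "the best I/O is the one you do not need to do".
class CachedStore:
    def __init__(self, backend):
        self.backend = backend      # dict standing in for slow storage
        self.cache = {}             # fast tier (e.g. DRAM or persistent NVM)
        self.backend_reads = 0      # count of "real" I/Os issued

    def read(self, key):
        if key in self.cache:       # cache hit: no backend I/O needed
            return self.cache[key]
        self.backend_reads += 1     # cache miss: one backend I/O
        value = self.backend[key]
        self.cache[key] = value     # keep the data close for next time
        return value

store = CachedStore({"blk0": b"hello", "blk1": b"world"})
for _ in range(5):
    store.read("blk0")              # only the first read hits the backend
print(store.backend_reads)          # 1
```

Five reads of the same block cost only one backend I/O; the other four are served from the faster, closer tier, which is exactly what the memory and storage hierarchy in the figure below is about.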


    Server Storage I/O memory hardware and software hierarchy along with technology tiers

    What did Intel and Micron announce?

    Intel SVP and General Manager of the Non-Volatile Memory Solutions Group Robert Crooke (left) and Micron CEO D. Mark Durcan made the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology, jointly developed and manufactured by Intel and Micron, which is a new form or category of NVM that can be used both for primary memory in servers, laptops and other computers, as well as for persistent data storage.


    Robert Crooke (Left) and Mark Durcan (Right)

    Summary of 3D XPoint announcement

    • New category of NVM memory for servers and storage
    • Joint development and manufacturing by Intel and Micron in Utah
    • Non volatile so can be used for storage or persistent server main memory
    • Allows NVM to scale with data, storage and processors performance
    • Leverages capabilities of both Intel and Micron who have collaborated in the past
    • Performance: Intel and Micron claim up to 1,000x faster than NAND flash
    • Availability: persistent NVM compared to DRAM, with better durability (life span) than NAND flash
    • Capacity: densities about 10x better than traditional DRAM
    • Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products)

    What applications and products is 3D XPoint suited for?

    In general, 3D XPoint should be usable for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT, cloud and managed service provider data center applications and services to consumer-focused uses, among many others.


    3D XPoint enabling various applications

    In general, applications or usage scenarios (along with supporting products) that can benefit from 3D XPoint include, among others, those that need larger amounts of main memory in a denser footprint, such as in-memory databases, little and big data analytics, gaming, waveform analysis for security, copyright or other detection analysis, life sciences, high-performance compute and high-productivity compute, energy, and video and content serving.

    In addition, there are applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or unplanned maintenance, or of having to wait for memories and caches to be warmed or re-populated after a server boot (or reboot). 3D XPoint will also be useful for applications that need faster read and write performance compared to current-generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, much as today’s applications have benefited from DRAM used in Dual Inline Memory Modules (DIMMs) and from NAND flash advances over the past several decades.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and neither DRAM nor NAND flash will be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint, along with more analysis and commentary.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third parties and partners; I have also bought and used some of their technologies directly and/or indirectly via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Server Storage I/O Benchmark Performance Resource Tools


    server storage I/O trends

    Updated 1/23/2018

    Server storage I/O benchmark performance resource tools, various articles and tips, including tools for legacy, virtual, cloud and software-defined environments.

    benchmark performance resource tools server storage I/O performance

    The best server and storage I/O (input/output operation) is the one that you do not have to do; the second best is the one with the least impact.

    server storage I/O locality of reference

    This is where the idea of locality of reference (e.g. how close the data is to where your application is running) comes into play, which is implemented via the tiered memory, storage and caching shown in the figure above.

    Cloud virtual software defined storage I/O

    Server storage I/O performance applies to cloud, virtual, software defined and legacy environments

    What this has to do with server storage I/O (and networking) performance benchmarking is keeping the ideas of locality of reference, context and the application workload in perspective, regardless of whether the environment is cloud, virtual, software-defined or legacy physical.
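To make the benchmarking theme concrete, here is a minimal, hypothetical sequential-read micro-benchmark sketch in Python. It is illustrative only: real tools such as fio, vdbench or diskspd control for caching, queue depth, alignment and other factors this deliberately ignores.

```python
# Sketch of a sequential-read micro-benchmark (illustrative only).
import os
import tempfile
import time

def sequential_read(path, block_size=4096):
    """Read a file in fixed-size blocks; return (total_bytes, io_count, seconds)."""
    total, ios = 0, 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
            ios += 1
    return total, ios, time.perf_counter() - start

# Create a small temporary test file (1 MiB of zeros) and read it back.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\0" * (1024 * 1024))
    path = tmp.name

total_bytes, io_count, elapsed = sequential_read(path)
print(total_bytes, io_count)        # 1048576 256
os.unlink(path)                     # clean up the test file
```

Note the context caveat from above applies here too: a 1 MiB file read through the OS page cache says little about your production workload; it only demonstrates the mechanics of counting I/Os and bytes over time.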

    StorageIOblog: I/O, I/O how well do you know about good or bad server and storage I/Os?
    StorageIOblog: Server and Storage I/O benchmarking 101 for smarties
    StorageIOblog: Which Enterprise HDDs to use for a Content Server Platform (7 part series with using benchmark tools)
    StorageIO.com: Enmotus FuzeDrive MicroTiering lab test using various tools
    StorageIOblog: Some server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
    StorageIOblog: Get in the NVMe SSD game (if you are not already)
    Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
    ComputerWeekly: Storage performance metrics: How suppliers spin performance specifications

    Via StorageIO Podcast: Kevin Closson discusses SLOB Server CPU I/O Database Performance benchmarks
    Via @KevinClosson: SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language
    Via BeyondTheBlocks (Reduxio): 8 Useful Tools for Storage I/O Benchmarking
    Via CCSIObench: Cold-cache Sequential I/O Benchmark
    CISJournal: Benchmarking the Performance of Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (PDF)
    Microsoft TechNet:Windows Server 2016 Hyper-V large-scale VM performance for in-memory transaction processing
    InfoStor: What’s The Best Storage Benchmark?
    StorageIOblog: How to test your HDD, SSD or all flash array (AFA) storage fundamentals
    Via ATTO: Atto V3.05 free storage test tool available
    Via StorageIOblog: Big Files and Lots of Little File Processing and Benchmarking with Vdbench

    Via StorageIO.com: Which Enterprise Hard Disk Drives (HDDs) to use with a Content Server Platform (White Paper)
    Via VMware Blogs: A Free Storage Performance Testing Tool For Hyperconverged
    Microsoft Technet: Test Storage Spaces Performance Using Synthetic Workloads in Windows Server
    Microsoft Technet: Microsoft Windows Server Storage Spaces – Designing for Performance
    BizTech: 4 Ways to Performance-Test Your New HDD or SSD
    EnterpriseStorageForum: Data Storage Benchmarking Guide
    StorageSearch.com: How fast can your SSD run backwards?
    OpenStack: How to calculate IOPS for Cinder Storage ?
    StorageAcceleration: Tips for Measuring Your Storage Acceleration

    server storage I/O STI and SUT

    Spiceworks: Determining HDD SSD SSHD IOP Performance
    Spiceworks: Calculating IOPS from Perfmon data
    Spiceworks: profiling IOPs
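The IOPS calculation links above often come down to simple arithmetic. A back-of-envelope sketch (simplified: it assumes a queue depth of one and ignores queuing, caching and concurrency effects) relating per-I/O latency, IOPS and throughput:

```python
# Simplified IOPS and throughput arithmetic (queue depth 1, no caching).
def iops_from_latency(latency_ms):
    """With one outstanding I/O, IOPS is roughly the inverse of per-I/O latency."""
    return 1000.0 / latency_ms

def throughput_mbps(iops, block_size_kb):
    """Throughput (MB/s) = IOPS x block size."""
    return iops * block_size_kb / 1024.0

iops = iops_from_latency(5.0)        # 5 ms per I/O -> 200 IOPS
print(iops)                          # 200.0
print(throughput_mbps(iops, 8))      # 8 KB blocks -> 1.5625 MB/s
```

This is also why block size matters when comparing numbers: the same device doing 200 IOPS moves eight times the data with 64 KB blocks as with 8 KB blocks.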

    vdbench server storage I/O benchmark
    Vdbench example via StorageIOblog.com

    StorageIOblog: What does server storage I/O scaling mean to you?
    StorageIOblog: What is the best kind of IO? The one you do not have to do
    Testmyworkload.com: Collect and report various OS workloads
    Whoishostingthis: Various SQL resources
    StorageAcceleration: What, When, Why & How to Accelerate Storage
    Filesystems.org: Various tools and links
    StorageIOblog: Can we get a side of context with them IOPS and other storage metrics?

    flash ssd and hdd

    BrightTalk Webinar: Data Center Monitoring – Metrics that Matter for Effective Management
    StorageIOblog: Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
    StorageIOblog: Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?

    server storage I/O bottlenecks and I/O blender

    Microsoft TechNet: Measuring Disk Latency with Windows Performance Monitor (Perfmon)
    Via Scalegrid.io: How to benchmark MongoDB with YCSB? (Perfmon)
    Microsoft MSDN: List of Perfmon counters for sql server
    Microsoft TechNet: Taking Your Server’s Pulse
    StorageIOblog: Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
    CMG: I/O Performance Issues and Impacts on Time-Sensitive Applications

    flash ssd and hdd

    Virtualization Practice: IO IO it is off to Storage and IO metrics we go
    InfoStor: Is HP Short Stroking for Performance and Capacity Gains?
    StorageIOblog: Is Computer Data Storage Complex? It Depends
    StorageIOblog: More storage and IO metrics that matter
    StorageIOblog: Moving Beyond the Benchmark Brouhaha
    Yellow-Bricks: VSAN VDI Benchmarking and Beta refresh!

    server storage I/O benchmark example

    YellowBricks: VSAN performance: many SAS low capacity VS some SATA high capacity?
    StorageIOblog: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
    StorageIOblog: Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
    StorageIOblog: Server Storage I/O Network Benchmark Winter Olympic Games

    flash ssd and hdd

    VMware VDImark aka View Planner (also here, here and here) as well as VMmark here
    StorageIOblog: SPC and Storage Benchmarking Games
    StorageIOblog: Speaking of speeding up business with SSD storage
    StorageIOblog: SSD and Storage System Performance

    Hadoop server storage I/O performance
    Various Server Storage I/O tools in a hadoop environment

    Michael-noll.com: Benchmarking and Stress Testing an Hadoop Cluster With TeraSort, TestDFSIO
    Virtualization Practice: SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
    StorageIOblog: Storage and IO metrics that matter
    InfoStor: Storage Metrics and Measurements That Matter: Getting Started
    SilvertonConsulting: Storage throughput vs. IO response time and why it matters
    Splunk: The percentage of Read / Write utilization to get to 800 IOPS?

    flash ssd and hdd
    Various server storage I/O benchmarking tools

    Spiceworks: What is the best IO IOPs testing tool out there
    StorageIOblog: How many IOPS can a HDD, HHDD or SSD do?
    StorageIOblog: Some Windows Server Storage I/O related commands
    Openmaniak: Iperf overview and Iperf.fr: Iperf overview
    StorageIOblog: Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)
    Quest: SQL Server Perfmon Poster (PDF)
    Server and Storage I/O Networking Performance Management (webinar)
    Data Center Monitoring – Metrics that Matter for Effective Management (webinar)
    Flash back to reality – Flash SSD Myths and Realities (Industry trends & benchmarking tips), (MSP CMG presentation)
    DBAstackexchange: How can I determine how many IOPs I need for my AWS RDS database?
    ITToolbox: Benchmarking the Performance of SANs

    server storage IO labs

    StorageIOblog: Dell Inspiron 660 i660, Virtual Server Diamond in the rough (Server review)
    StorageIOblog: Part II: Lenovo TS140 Server and Storage I/O Review (Server review)
    StorageIOblog: DIY converged server software defined storage on a budget using Lenovo TS140
    StorageIOblog: Server storage I/O Intel NUC nick knack notes First impressions (Server review)
    StorageIOblog & ITKE: Storage performance needs availability, availability needs performance
    StorageIOblog: Why SSD based arrays and storage appliances can be a good idea (Part I)
    StorageIOblog: Revisiting RAID storage remains relevant and resources

    Interested in cloud and object storage? Visit our objectstoragecenter.com page; for flash SSD, check out the storageio.com/ssd page, along with data protection, RAID, various industry links and more here.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Watch for additional links to be added above in addition to those that appear via comments.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    VMware VVOLs storage I/O fundamentals (Part 1)

    Note that this is a three-part series, with the first piece here (e.g. Are VMware VVOLs in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOLs and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOLs and storage I/O fundamentals Part 2).

    Some of you may already be participating in the VMware beta of VVOL involving one of the initial storage vendors also in the beta program.

    Ok, now let’s go a bit deeper, however if you want some good music to listen to while reading this, check out @BruceRave GoDeepMusic.Net and shows here.

    Taking a step back, digging deeper into Storage I/O and VVOL’s fundamentals

    Instead of a VM host accessing its virtual disk (aka VMDK) stored in a VMFS-formatted data store (part of the ESXi hypervisor) built on top of a SCSI LUN (e.g. SAS, SATA, iSCSI, Fibre Channel aka FC, FCoE aka FC over Ethernet, IBA/SRP, etc.) or an NFS file system presented by a storage system (or appliance), VVOLs push more functionality and visibility down into the storage system. VVOLs shift more intelligence and work from the hypervisor down into the storage system. Instead of a storage system simply presenting a SCSI LUN or NFS mount point with limited (coarse) to no visibility into how the underlying storage bits, bytes and blocks are being used, storage systems gain more awareness.

    Keep in mind that even files and objects still ultimately get mapped to pages and blocks (aka sectors), even on NAND flash-based SSDs. However, also keep an eye on some new technology such as the Seagate Kinetic drive which, instead of responding to SCSI block-based commands, leverages object APIs and associated software on servers. Read more about these emerging trends here and here at objectstoragecenter.com.

    With a normal SCSI LUN, the underlying storage system has no knowledge of how the upper-level operating system, hypervisor, file system or application such as a database (doing raw I/O) is allocating the pages or blocks of memory aka storage. It is up to the upper-level storage and data management tools to map from objects and files to the corresponding extents, pages and logical block addresses (LBAs) understood by the storage system. In the case of a NAS solution, there is a layer of abstraction placed over the underlying block storage, handling file management and the associated file-to-LBA mapping activity.
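The file-to-LBA mapping described above can be illustrated with a trivial sketch (the sector size and offsets are illustrative; real file systems also deal with extents, fragmentation and alignment):

```python
# Illustrative sketch: mapping a file byte offset to a logical block
# address (LBA), as upper layers do before issuing block I/O to a LUN.
SECTOR_SIZE = 512  # bytes per sector on traditional (non-4Kn) devices

def offset_to_lba(byte_offset, sector_size=SECTOR_SIZE):
    """Return (LBA containing byte_offset, offset within that sector)."""
    return byte_offset // sector_size, byte_offset % sector_size

lba, within = offset_to_lba(10_000)
print(lba, within)                  # 19 272  (19 * 512 = 9728, plus 272)
```

The storage system only ever sees the resulting LBA; which file, VMDK or database page that sector belongs to is knowledge held by the layers above, which is precisely the visibility gap VVOLs address.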

    Storage I/O basics
    Storage I/O and IOP basics and addressing: LBAs and LBNs

    Getting back to VVOLs: instead of simply presenting a LUN, which is essentially a linear range of LBAs (think of a big table or array) where the hypervisor manages data placement and access, the storage system now gains insight into which LBAs correspond to various entities such as a VMDK or VMX, log, clone, swap or other VMware objects. With this added insight, storage systems can now perform native and more granular functions such as clone, replication and snapshot, among others, as opposed to simply working on a coarse LUN basis. Similar concepts extend to NAS NFS-based access. Granted, there is more to VVOLs, including the ability to get the underlying storage system more closely integrated with the virtual machine, hypervisor and associated management, including supported service management and classes or categories of service across performance, availability, capacity and economics.

    What about VVOL, VAAI and VASA?

    VVOLs build on earlier VMware initiatives including VAAI and VASA. With VAAI, VMware hypervisors can offload common functions to storage systems that support features such as copy, clone and zero-copy, among others, much as a computer can offload graphics processing to a graphics card if present.

    VASA, however, provides a means for visibility, insight and awareness between the hypervisor and its associated management (e.g. vCenter, etc.) and the storage system. This includes storage systems being able to communicate and publish to VMware their capabilities for storage space capacity, availability, performance and configuration, among other things.

    With VVOLs, VASA gets leveraged for bidirectional (e.g. two-way) communication, where VMware hypervisor and management tools can tell the storage system about configuration and activities to perform, among other things. Hence why VASA is important to have in your VMware CASA.

    What’s this object storage stuff?

    VVOLs are a form of object storage access in that they differ from traditional block (LUNs) and file (NAS volumes/mount points) access. However, keep in mind that not all object storage is the same, as there are different object storage access methods and architectures.

    object storage
    Object Storage basics, generalities and block file relationships

    Avoid the mistake of assuming that when you hear object storage, it means ANSI T10 (the folks that manage the SCSI command specifications) Object Storage Device (OSD) or something else. There are many different types of underlying object storage architectures, some with block and file as well as object access front ends. Likewise, there are many different types of object access that sit on top of object architectures as well as traditional storage systems.

    Object storage I/O
    An example of how some object storage gets accessed (not VMware specific)

    Also keep in mind that there are many different types of object access mechanisms, including HTTP REST-based, S3 (e.g. a common industry de facto standard based on the Amazon Simple Storage Service), SNIA CDMI, SOAP, Torrent, XAM, JSON, XML, DICOM and HL7, just to name a few, not to mention various programmatic bindings or application-specific implementations and APIs. Read more about object storage architectures, access and related topics, themes and trends at www.objectstoragecenter.com
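To make the contrast with block storage's linear LBA addressing concrete, here is a minimal, hypothetical sketch of the object access model, where data is retrieved by bucket and key along with its metadata (all names are illustrative and not any particular product's API):

```python
# Illustrative sketch: object access model (key -> data + metadata),
# in contrast to block storage's linear LBA addressing.
class ObjectStore:
    def __init__(self):
        self.objects = {}

    def put(self, bucket, key, data, **metadata):
        """Store an object by (bucket, key) along with arbitrary metadata."""
        self.objects[(bucket, key)] = {"data": data, "meta": metadata}

    def get(self, bucket, key):
        """Retrieve an object's data and metadata by (bucket, key)."""
        obj = self.objects[(bucket, key)]
        return obj["data"], obj["meta"]

store = ObjectStore()
store.put("photos", "2014/sochi.jpg", b"...bytes...", content_type="image/jpeg")
data, meta = store.get("photos", "2014/sochi.jpg")
print(meta["content_type"])          # image/jpeg
```

The point of the sketch is that the access unit is a whole named object carrying its own metadata, rather than an anonymous range of sectors; the underlying architecture still maps those objects to pages and blocks, as noted earlier.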

    Let's take a break here, and when you are ready, click here to read the third piece in this series, VMware VVOLs and storage I/O fundamentals Part 2.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server Storage I/O Network Benchmark Winter Olympic Games

    Storage I/O trends


    It is time for the 2014 Winter Olympic Games in Sochi, Russia, where competitors, including some athletes, come together in what has become a mix of sporting and engaging entertainment activities.

    Games of inches and seconds, performance and skill

    Some of these activities, including real Olympic game events, are heavier on sports appeal, some on artistry, and others are pure entertainment with a mix of beauty, brawn and maybe even a beast or two. Then there are those events that have been around since the last ice age, while others are from the post-global-warming era.

    Hence some have been around longer than others, showing a mix of old and new in terms of the sports and athletes, not to mention technology and their outfits.

    I mean, how about some of the new snowboarding and things on skis being done; can you imagine if they brought in as a new "X" sport roller derby on the short-track speed skating oval sponsored by Red Bull or Bud Light? Wait, that sounds like the Red Bull Crashed Ice event (check it out if you are not familiar with it); think motocross, hockey and downhill on ice. How about getting some of the South African long distance sprinters to learn how to speed skate; talk about moving some gold metal as in medals back to the African continent! On the other hand, the current powers that be would lodge a protest, or change the benchmark or rules to stay in power. Hmm, sound familiar to IT?

    Ok, enough of the fun stuff (for now), let’s get back on track here (catch that pun?).

    Metrics that matter, winners and losers

    Since these are the Olympics, let's also remember that there are still awards for individual and team winners (along with second and third place). After all, if all Olympians were winners, there would be no losers, and with no losers, how could there be a winner?

    Who or what decides the winners vs. losers involves metrics that matter, something that also applies to servers, storage I/O networking hardware, software and services.

    In the case of the Olympics, some of the sports or events are based on speed, or how fast (e.g. time) something is done, or how much is accumulated or done in that amount of time, while in other events the metrics that matter may be more of a mystery, based on judging that may be subjective.

    The technologies to record times, scores, movements and other things that go into scoring have certainly improved, as has the ability for fans to engage and vote their choice, or opposition, via social media venues from Twitter to Facebook among others.

    What about server storage I/O networking benchmarks

    There could easily be an Information Technology (IT) or data infrastructure benchmarking Olympics with events such as fastest server (physical, virtual or cloud; individual or consortium team), storage, I/O and networking across hardware, software or services. Of course, there would be different approaches favored by the various teams, with disputes, protests and other things sometimes seen during Olympic games. One of the challenges, however, is what would be the metrics that matter, particularly to the various marketing groups of each organization or their joint consortium?

    Just like with sports, which of the various industry trade groups or consortiums would be the ruling party or voice for a particular event, specifying the competition criteria, scoring and other things? What happens when there is a breakaway group that launches its own competing approach, yet when it comes time for the IT benchmarking Olympics, which of the various bodies does the Olympic committee defer to? In case you are not familiar with it, in sports there are various groups and sub-groups who can decide the participants for various sports, perhaps independent of an overall group. Sound like IT?

    Storage I/O trends

    Let the games begin

    So then the fun starts. However, which of the events are relevant to your needs or interests? Sure, some are fun or entertaining while others are not practical. Some you can do yourself, while others are just fun to watch, both for the thrill of victory and the agony of defeat.

    This is similar to IT industry benchmarking and specmanship competitions, some of which are more relevant than others, and then there are those that are simply entertaining.

    Likewise some benchmarks or workload claims can be reproduced to confirm the results or claims, while others remain more like the results of figure skating judges.

    Hence some of the benchmark games are more entertaining; however, for those who are not aware or informed, they may turn out to be misinformation or lead to poor decision-making.

    Consequently, the benchmarks and metrics that matter are those that most closely align with what your environment is or will be doing.

    If your environment is going to be running a particular simulation or script, then so be it; otherwise, look for comparisons that are reflective of your actual workload.

    On the other hand, if you can’t find something that is applicable, then look at tools and results that have meaning along with relevance, not to mention those that provide clarity and repeatability. Being repeatable means that you can get access to the tools, scripts or scenario (preferably free) to run in your own environment.

    There is a long list of benchmarks and workload simulation tools, as well as traces available, some for free and some for fee, that apply to components, subsystems or complete application systems from server, storage I/O networking applications and hardware. These include those for email such as Microsoft Exchange, SQL databases, LoginVSI for VDI, VMmark for VMware, and Hadoop and HDFS related tools for big data among many others (see more here).
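For a sense of what repeatable means in practice, here is a minimal sketch (Python, not one of the industry tools named above) of a read micro-benchmark whose workload is fully described by its parameters, so anyone can re-run it: a scratch file read sequentially in fixed-size blocks, reporting operation count and IOPS. Note that it measures the OS page cache as much as the device; real tools such as fio or vdbench control for caching, queue depth and access patterns.

```python
# Hedged sketch of a repeatable storage read micro-benchmark. The workload is
# fully parameterized (file size, block size), so results can be reproduced.
import os
import tempfile
import time

def bench_reads(path, block_size=4096):
    # Sequentially read the file in fixed-size blocks, counting operations
    ops = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            ops += 1
    elapsed = time.perf_counter() - start
    return ops, ops / elapsed  # operation count and IOPS

# Create a 1 MiB scratch file as the benchmark target
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(1024 * 1024))
    path = tmp.name

ops, iops = bench_reads(path)
print(f"{ops} reads")  # 256 reads of 4 KiB each for a 1 MiB file
os.remove(path)
```

Because the block size and file size are stated, anyone can verify the operation count and compare IOPS across systems on equal terms, which is exactly the transparency the figure-skating style benchmarks lack.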

    Apples to Apples vs. Apple pie vs. Orange Jello

    Something else that matters is comparing apples to apples vs. apples to oranges, or worse, apple pie to orange Jello.

    This means knowing or gaining insight into the pieces as well as how they behave under different conditions, as well as the entire system, for a baseline (e.g. normal) vs. abnormal.

    Hence it is the winter server storage I/O networking benchmark games, with the first event having taken place earlier this week with team Brocade taking on Cisco. Here is a link to a post by Tony Bourke (@tbourke) that provides some interesting perspectives and interactions, along with a link here to the Brocade sponsored report done by Evaluator Group.

    In this match-up, Team Brocade (with HP servers, Brocade switches and an unnamed 16GFC SSD storage system) takes on Team Cisco and their UCS (also with an unnamed 16GFC SSD system that I wonder if Cisco even knows whose it was). Ironic that it was almost six years to the date that there was a similar winter benchmark wonder event when NetApp submitted an SPC result for EMC (read more about that cold day here).

    The Brocade FC (using HP servers and somebody’s SSD storage) vs. Cisco FCoE using UCS (and somebody else’s storage) comparison is actually quite entertaining; granted, it can also be educational on what to do or not do, or what to focus on or include, among other things. The report also raises many questions that seem more like wondering why somebody won a figure skating event vs. the winner of a men’s or women’s hockey game.

    Closing thoughts (for now)

    So here is my last point and perspective: let’s have a side of context with those IOPS, TPS, bandwidth and other metrics that matter.

    Take metrics and benchmarks with a grain of salt; however, look for transparency in how they are produced and what information is provided, and most important, whether they matter or are relevant to your environment, or are simply entertaining.

    Let’s see what the next event in the ongoing server storage I/O networking benchmark 2014 Winter Olympic games will be.

    Some more reading:
    SPC and Storage Benchmarking Games
    Moving Beyond the Benchmark Brouhaha
    More storage and IO metrics that matter
    Its US Census time, What about IT Data Centers?
    March Metrics and Measuring Social Media (keep in mind that March Madness is just around the corner)
    PUE, Are you Managing Power, Energy or Productivity?

    How many IOPS can a HDD, HHDD or SSD do?
    Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?

    You can also take part in the ongoing or re-emerging FC vs. FCoE hype and FUD events by casting your vote here and seeing the results below.

    Note the following poll is from a previous StorageIOblog post (Where has the FCoE hype and FUD gone? (with poll)).

    Disclosure: I used to work for Evaluator Group after working for a company called Inrange that competed with, then got absorbed (via CNT and McData) into Brocade, which has been a client, as has Cisco. I also do performance and functionality testing, audits, validation and proof of concept services on my own as well as in client labs using various industry standard tools and techniques. Otoh, not sure that I even need to disclose anything; however, it’s easy enough to do, so why not ;).

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Fall 2013 StorageIO Update Newsletter

    Storage I/O trends

    Fall 2013 StorageIO Update Newsletter

    Welcome to the Fall 2013 (joint September and October) edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics. It is fall (at least here in North America), which means conferences, symposiums, virtual and physical events, seminars and webinars in addition to normal client project activities. Starting things off was VMworld back in late August in San Francisco, which kicked off the fall (or back to school) season of activity. VMworld was followed by many other events, including in-person along with virtual or online ones such as webinars and Google+ hangouts among others, not to mention all the briefings for vendor product announcements and updates. Check out the industry trends perspectives articles, comments and blog posts below that cover some activity over the past few months.

    VMworld 2013
    Congratulations to VMworld on the 10th anniversary of the event. With the largest installment yet of VMworld in terms of attendance, there were also many announcements. Here is a synopsis of some of those announcements, which of course included plenty of software defined marketing (SDM).

    CMG and Storage Performance
    During mid-September I was invited to give an industry trends and perspectives presentation to the Storage Performance Council (SPC) board. The SPC board was meeting in the Minneapolis area, and I gave a brief talk about Metrics that Matter and the importance of context with a focus on applications. Speaking of the Minneapolis area, Tom Becchetti (@tbecchetti) organized a great CMG event hosted over at Blue Cross Blue Shield of Minnesota. I gave a discussion around Technolutionary: technology evolution and revolution, using old and new things in new ways.

    Check out our backup, restore, BC, DR and archiving resources (under the Resources section on StorageIO.com) for various presentations, book chapter downloads and other content.

    SNW Fall 2013 Long Beach
    Speaking of traveling, there was a quick trip out to Long Beach for the fall 2013 edition of Storage Networking World (SNW), where I had some good meetings and conversations with those who were actually there. No need to sugar coat it; likewise, no need to kick sand in its face. Plain and simple, that SNW is not the event it used to be has been a common discussion theme for several years, so I had set my expectations accordingly.

    Some have asked me why I even spent time, money and resources to attend SNW?

    My answer is that I had some meetings to attend to, wanted to see and meet with others who were going to be there, and perhaps even say goodbye to an event that I have been involved with for over a decade.

    Does that mean I’m all done with SNW?

    Not sure yet, as I will have to wait and see what SNIA and IDG/Computerworld, the event co-owners and producers, put together for future events. However, there are enough other events and activities to pick up the slack, which is part of what has caused the steady decline in events like SNW among others.

    Perhaps it is time for SNIA to partner with another adjacent yet like-minded organization such as CMG to collaborate and try doing something like what was done in the early 2000s? That is, SNIA providing their own seminars, along with others such as myself who are involved with CMG, SNW and SNIA, to beef up or set up a storage and I/O focused track at the CMG event.

    Beyond those items mentioned above, or in the following section, there are plenty of interesting and exciting things occurring in the background that I can’t talk about yet. However, watch for future posts, commentary, perspectives and other information down the road (and in the not so distant future).

    Enjoy this edition of the StorageIO Update newsletter.

    Ok, nuff said (for now)

    Cheers gs

    StorageIO Industry Trends and Perspectives
    Industry trends perspectives and commentary
    What is being seen, heard and talked about while out and about

    The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.

    Storage I/O trends

    InfoStor: Perspectives on Data Dynamics file migration tool (Read more about StorageX later in this newsletter)
    SearchStorage: Perspectives on Data Dynamics resurrects StorageX for file migration
    SearchStorage: Perspectives on Cisco buying SSD storage vendor Whiptail

    Recent StorageIO Tips and Articles in various venues:

    21cIT:  Why You Should Consider Object Storage
    InfoStor:  HDDs Are Still Spinning (Rust Never Sleeps)
    21cIT:  Object Storage Is in Your Future, Even if You Use Files
    21cIT:  Playing the Name Game With Virtual Storage
    InfoStor:  Flash Data Storage: Myth vs. Reality
    InfoStor:  The Nand Flash Cache SSD Cash Dance
    SearchEnterpriseWAN:  Remote Office / ROBO backup and data protection for networking Pro’s
    TheVirtualizationPractice:  When and Where to use NAND Flash SSD for Virtual Servers
    FedTech:  These Data Center (DCIM) Tools Can Streamline Computing Resources

    Storage I/O posts

    Recent StorageIO blog post:

    Seagate Kinetic Cloud and Object Storage I/O platform (and Ethernet HDD)
    Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?
    Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash
    WD buys nand flash SSD storage I/O cache vendor Virident
    EMC New VNX MCx doing more storage I/O work vs. just being more
    Is more of something always better? Depends on what you are doing
    VMworld 2013 Vmware, server, storage I/O and networking update (Day 1)
    EMC ViPR software defined object storage part II

    Check out our objectstoragecenter.com page where you will find a growing collection of information and links pertaining to cloud and object storage themes, technologies and trends.

    Brouwer Storage Consultancy

    StorageIO in Europe (Netherlands)
    Spent over a week in the Netherlands, where I presented three different seminar workshop sessions organized by Brouwer Storage Consultancy, which is celebrating its 10th anniversary in business. These sessions spanned five full days of interactive discussions with an engaged, diverse group of attendees in the Nijkerk area who came from across Holland to take part in these workshops.

    Congratulations to Gert and Frank Brouwer on their ten years of being in business and best wishes for many more. Fwiw, for those who are curious, StorageIO will be ten years young in business in about two years.

    StorageIO Industry Trends and Perspectives

    Some observations from while in Europe:

    Continued cloud privacy concerns amplified by the NSA and suspicion of US-based companies, yet many are not aware of similar concerns about European or UK-based firms from those outside those areas. While there were some cloud concern conversations over the demise of Nirvanix, those seemed less so than in the media or the US, given that at least in Holland they have seen other cloud and storage as a service firms come and go already. It should be noted that the US has also seen cloud and storage as a service startups come and go; however, I think sometimes we, or at least the media, tend to have a short if not selective memory at times.

    In one of our workshop sessions we were talking about service level objectives (SLO), service level agreements (SLA), recovery point objectives (RPO) and recovery time objectives (RTO) among other themes. Somebody asked why the focus on time in RPO, and why not a transactional perspective, which I thought was a brilliant question. We had a good conversation in the group and concurred that while RPO is what the industry uses, there also needs to be a transactional state context tied to what is inferred or assumed with RPO and RTO. Thus the importance of looking beyond just the point in time to a given transactional point or state.

    Note that transactional could mean a database, file system, backup or data protection index or catalog, metadata repository or other entity. This is where some should be jumping up and down like Donkey in Shrek, wanting to point out that that is exactly what RTO and RPO refer to, which would be great. However, all too often what is assumed is not conveyed; thus those who don’t know, well, they assume or simply don’t know what others assume.
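To make the time-based vs. transaction-based recovery point distinction concrete, here is a minimal sketch (Python; the log format and names are hypothetical, not from any particular database) of recovering to the last fully committed transaction boundary rather than to an arbitrary wall-clock cutoff:

```python
# Hedged sketch: transactional recovery point selection. The log entries
# (timestamp, transaction id, state) are illustrative, not a real DB format.
txn_log = [
    (100, "txn-1", "committed"),
    (200, "txn-2", "committed"),
    (300, "txn-3", "in-flight"),  # failure occurred mid-transaction
]

def transactional_recovery_point(log):
    # Roll back to the last fully committed transaction, ignoring any
    # in-flight work, so the recovered state is transactionally consistent
    committed = [entry for entry in log if entry[2] == "committed"]
    return committed[-1] if committed else None

rp = transactional_recovery_point(txn_log)
print(rp[1])  # recovers to txn-2, the last consistent transactional state
```

A purely time-based RPO of, say, timestamp 300 would include the torn txn-3; tying the recovery point to a transaction boundary is the state context discussed above.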

    StorageIO Industry Trends and Perspectives

    Data Dynamics StorageX 7.0 Intelligent Policy Based File Data Migration – There is no such thing as a data or information recession. Likewise, people and data are living longer as well as getting larger. These span various use cases from traditional to personal or at-work productivity, from little to big data content, and collaboration including file or document sharing to rich media applications, all of which leverage unstructured data. For example: email, word processing, back-office documents, web and text files, presentations (e.g. PowerPoint), photos, audio and video among others. These macro trends result in the continued growth of unstructured Network Attached Storage (NAS) file data.

    Thus, a common theme is adding management, including automated data movement and migration, to bring structure around unstructured NAS file data. More than a data mover or storage migration tool, Data Dynamics StorageX is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage systems, protocols and tools, including Windows CIFS and Unix/Linux NFS.
    (Disclosure: Data Dynamics has been a StorageIO client.) Visit Data Dynamics at www.datadynamicsinc.com/

    Server and StorageIO seminars, conferences, webcasts, events and activities: StorageIO activities (out and about)

    Seminars, symposium, conferences, webinars
    Live in person and recorded recent and upcoming events

    Announcing: Backup.U brought to you by Dell

    Some online (live and recorded) events include an ongoing series tied to data protection (backup/restore, HA, BC, DR and archiving) called Backup.U, organized and sponsored by Dell Data Protection Software, which you can learn more about at the landing page www.software.dell.com/backupu (more on this in a future post). In addition to data protection, there are other events and activities, including a BrightTalk webinar on storage I/O and networking for cloud environments (here).

    In addition to the above, check out the StorageIO calendar to see more recent and upcoming activities.

    Watch for more 2013 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

    Vendors, VARs and event organizers, give us a call or send an email to discuss having us involved in your upcoming podcast, webcast, virtual seminar, conference or other events.

    If you missed the Summer (July and August) 2013 StorageIO Update newsletter, click here to view that and other previous editions as HTML or PDF versions. To subscribe to this newsletter (and pass it along), click here. View archives of past StorageIO Update newsletters as well as download PDF versions at: www.storageio.com/newsletter

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)


    Seagate Kinetic Cloud and Object Storage I/O platform (and Ethernet HDD)

    Storage I/O trends

    Seagate Kinetic Cloud and Object Storage I/O platform

    Seagate announced today their Kinetic platform and drive designed for use by object API accessed storage, including for cloud deployments. The Kinetic platform includes Hard Disk Drives (HDD) that feature 1Gb Ethernet (1 GbE) attached devices that speak an object access API, or what Seagate refers to as key/value.

    Seagate Kinetic architecture

    What is being announced with Seagate Kinetic Cloud and Object (Ethernet HDD) Storage?

    • Kinetic Open Storage Platform – Ethernet drives, key / value (object access) API, partner software
    • Software development kits (SDK) – Developer tools, documentation, drive simulator, code libraries, code samples including for SwiftStack and Riak.
    • Partner ecosystem

    What is Kinetic?

    While it has 1 GbE ports, do not expect to be able to use those for iSCSI or NAS including NFS, CIFS or other standard access methods. Being Ethernet based, the Kinetic drive only supports the key value object access API. What this means is that applications, cloud or object stacks, key value and NoSQL data repositories, or other software that adopts the API can communicate directly with the drive using object access.

    Seagate Kinetic storage

    Internally, the HDD functions as a normal drive would for storing and accessing data; the object access function and translation layer shifts from being in an Object Storage Device (OSD) server node to inside the HDD. The Kinetic drive takes on the key value API personality over its 1 GbE ports instead of traditional Logical Block Addressing (LBA) and Logical Block Number (LBN) access using 3G, 6G or emerging 12G SAS or SATA interfaces. Instead, Kinetic drives respond to object access (aka what Seagate calls key/value) API commands such as Get and Put among others. Learn more about object storage, access and clouds at www.objectstoragecenter.com.
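To illustrate the access model shift (this is not the actual Kinetic wire protocol, which uses protocol buffers over TCP; it is a mock of the semantics, with hypothetical class, method and key names), here is how application code sees a key/value drive vs. LBA block addressing:

```python
# Illustrative mock of a key/value drive API in the style of Kinetic's
# Get/Put/Delete commands. Names are hypothetical, not Seagate's actual SDK.
class KeyValueDrive:
    def __init__(self):
        self._store = {}  # stand-in for the drive's internal media mapping

    def put(self, key: bytes, value: bytes) -> None:
        # The drive, not a server-side OSD node, maps the key to media
        self._store[key] = value

    def get(self, key: bytes) -> bytes:
        return self._store[key]

    def delete(self, key: bytes) -> None:
        del self._store[key]

drive = KeyValueDrive()
drive.put(b"user42/profile", b'{"name": "example"}')
print(drive.get(b"user42/profile"))
# Contrast with block access: no LBA, sector size or partition table is
# visible to the application; the drive presents only keys and values.
```

The point of the sketch is the interface contract: software built on key/value access talks to the drive directly over Ethernet, which is why block-oriented protocols such as iSCSI do not apply.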

    Storage I/O trends

    Some questions and comments

    Is this the same as what was attempted almost a decade ago now with the T10 OSD drives?

    Seagate claims no.

    What is different this time around with Seagate doing a drive that to some may vaguely resemble the failed predecessor T10 OSD approach?

    Industry support for object access and API development has progressed from an era of build it and they will come thinking, to one where the drives are adapted to support current cloud, object and key value software deployments.

    Won’t 1 GbE ports be too slow vs. 12G or 6G or even 3G SAS and SATA ports?

    Keep in mind those would be apples to oranges comparisons based on the protocols and types of activity being handled. Kinetic types of devices initially will be used for large data intensive applications where the emphasis is on storing or retrieving large amounts of information, vs. low latency transactions. Also, keep in mind that one of the design premises is to keep cost low and spread the work over many nodes and devices to meet those goals, while relying on server-side caching tools.

    Storage I/O trends

    Does this mean that the HDD is actually software defined?

    Seagate and other HDD manufacturers have not yet jumped on the software defined marketing (SDM) bandwagon. They could join the software defined fun (SDF) and talk about a software defined disk (SDD) or software defined HDD (SDHDD); however, let us leave that alone for now.

    The reality is that there is far more software in a typical HDD than is realized. Sure, some of that is packaged inside ASICs (Application Specific Integrated Circuits) or running as firmware that can be updated. However, there is a lot of software running in a HDD, hence the need for powerful yet energy-efficient processors in those devices. On a drive per drive basis, you may see a Kinetic device consume more energy vs. other equivalent HDDs due to the increase in processing (compute) needed to run the extra software. However, that also represents an off-load of some work from servers, enabling them to be smaller or to do more work.

    Are these drives for everybody?

    It depends on whether your application, environment, platform and technology can leverage them or not. This means that if you view the world only through what is new or emerging, then these drives may be for all of those environments, while other environments will continue to leverage different drive options.

    Object storage access

    Does this mean that block storage access is now dead?

    Not quite; after all, there is still some block activity involved, it has just been further abstracted. On the other hand, many applications, systems or environments still rely on block as well as file based access.

    What about OpenStack, Ceph, Cassandra, MongoDB, HBase and other support?

    Seagate has indicated those and others are targeted to be included in the ecosystem.

    Seagate needs to be careful balancing their story and message with Kinetic to play to and support those focused on the new and emerging, while also addressing their bread and butter legacy markets. The balancing act is communicating options and the flexibility to choose and adopt the right technology for the task, without being scared of the future, clinging to the past, or throwing the baby out with the bath water in exchange for something new.

    For those looking to build object storage systems, or cloud and other scale-out solutions, Kinetic represents a new tool to learn more about as part of your due diligence.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)


    Summer 2013 Server and StorageIO Update Newsletter

    StorageIO 2013 Summer Newsletter

    Cloud, Virtualization, SSD, Data Protection, Storage I/O

    Welcome to the Summer 2013 (combined July and August) edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics.

    Summer 2013 Newsletter

    This summer has been far from quiet on the mergers and acquisitions (M&A) front, with Western Digital (WD) continuing its buying spree including STEC among others. There are also the HDS Mid Summer Storage and Converged Compute Enhancements and EMC Evolves Enterprise Data Protection with Enhancements (Part I and Part II).

    With VMworld just around the corner along with many other upcoming events, watch for more announcements to be covered in future editions and on StorageIOblog as we move into fall.

    Click on the following links to view the Summer 2013 edition as an HTML (sent via email) version, or as a PDF version. Visit the newsletter page to view previous editions of the StorageIO Update.

    You can subscribe to the newsletter by clicking here.

    Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

    Ok Nuff said, for now

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Part II: IBM Server Side Storage I/O SSD Flash Cache Software

    Storage I/O trends

    Part II IBM Server Flash Cache Storage I/O accelerator for SSD

    This is the second in a two-part post series on IBM’s Flash Cache Storage Accelerator (FCSA) for Solid State Device (SSD) storage announced today. You can view part I of the IBM FCSA announcement synopsis here.

    Some FCSA ssd cache questions and perspectives

    What is FCSA?
    FCSA is a server-side storage I/O or IOP caching software tool that makes use of local (server-side) NAND flash SSD (PCIe cards or drives). As a cache tool (view IBM flash site here), FCSA provides persistent read caching on IBM servers (xSeries, Flex and Blade x86 based systems) with write-through cache (e.g. data cached for later reads), while write data is written directly to block attached storage including SANs. Back-end storage can be iSCSI, SAS, FC or FCoE based block systems from IBM or others, including all SSD, hybrid SSD or traditional HDD based solutions from IBM and others.

    How is this different from just using a dedicated PCIe nand flash SSD card?
    FCSA complements those by using them as persistent storage to cache storage I/O reads to boost performance. By using the PCIe NAND flash card or SSD drives, FCSA and other storage I/O cache optimization tools free up valuable server-side DRAM from having to be used as a read cache on the servers. In addition, caching tools such as FCSA keep locally cached reads closer to the applications on the servers (e.g. locality of reference), reducing the impact on back-end shared block storage systems.

    What is FCSA for?
    With storage I/O or IOPS and application performance in general, location matters due to locality of reference, hence the need for different approaches in various environments. IBM FCSA is storage I/O caching software that reduces the impact of applications having to do random read operations. In addition to caching reads, FCSA also has a write-through cache, which means that while data is written to back-end block storage, including iSCSI, SAS, FC or FCoE based storage (IBM or other vendors), a copy of the data is cached for later reads. Thus, while the best storage I/O is the one that does not have to be done (e.g. can be resolved from cache), the second best would be writes that go to a storage system without competing with read requests (which are handled via cache).
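The read-cache-plus-write-through behavior described above can be sketched as follows (Python; the class and method names are illustrative, not IBM FCSA's actual interfaces): writes land on the back-end immediately, a copy is kept in the local cache, and subsequent reads are served locally.

```python
# Hedged sketch of write-through caching semantics. Names are illustrative;
# dicts stand in for SAN block storage and a server-side SSD cache.
class WriteThroughCache:
    def __init__(self, backend: dict):
        self.backend = backend   # stand-in for back-end SAN block storage
        self.cache = {}          # stand-in for server-side flash cache

    def write(self, block, data):
        self.backend[block] = data  # write-through: back-end updated first
        self.cache[block] = data    # keep a copy for later reads

    def read(self, block):
        if block in self.cache:     # cache hit: no back-end I/O needed
            return self.cache[block]
        data = self.backend[block]  # cache miss: fetch and populate cache
        self.cache[block] = data
        return data

san = {}
c = WriteThroughCache(san)
c.write(7, b"payload")
assert san[7] == b"payload"     # durable on the back-end immediately
assert c.read(7) == b"payload"  # subsequent read served from local cache
```

The design choice write-through (vs. write-back) is what keeps the back-end authoritative: losing the cache loses no data, only read performance, which fits the "second best I/O" point above.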

    Storage I/O trends

    Who else is doing this?
    This is similar to what EMC initially announced and released in February 2012 with VFCache, since renamed XtremSW, along with other caching and I/O optimization software from others (e.g. SanDisk, Proximal Data and PernixData among others).

    Does this replace IBM EasyTier?
    The simple answer is no; one is for tiering (e.g. EasyTier), the other is for I/O caching and optimization (e.g. FCSA).

    Does this replace or compete with other IBM SSD technologies?
    With anything, it is possible to find a way to make or view it as competitive. However, in general FCSA complements other IBM storage I/O optimization and management software tools such as EasyTier, as well as leveraging and coexisting with IBM’s various SSD products (from PCIe cards to drives to drive shelves to all SSD and hybrid SSD solutions).

    How does FCSA work?
    The FCSA software works either in a physical machine (PM) bare-metal mode with Microsoft Windows operating systems (OS) such as Server 2008 and 2012, among others, with *nix support for Red Hat Linux, or in a VMware virtual machine (VM) environment. In a VMware environment, High Availability (HA), DRS and VMotion services and capabilities are supported. Hopefully it will be sooner vs. later that we hear a follow-up announcement from IBM (pure speculation and wishful thinking) on support for more hypervisors (e.g. Hyper-V, Xen, KVM) along with CentOS, Ubuntu or Power based systems including IBM pSeries. Read more about IBM Pure and Flex systems here.

    What about server CPU and DRAM overhead?
    As should be expected, a minimal amount of server DRAM (e.g. main memory) and CPU processing cycles are used to support the FCSA software and its drivers. The reason I say "as should be expected" is that you cannot have software running on a server doing any type of work without it consuming some amount of DRAM and processing cycles. Granted, some vendors will try to spin this and claim that no server-side DRAM or CPU is consumed, which would be true only if their solution were completely external to the server (VM or PM). The important thing is to understand how much CPU and DRAM are consumed, along with the corresponding effectiveness benefit that is derived.


    Does FCSA work with NAS (NFS or CIFS) back-end storage?
    No, this is a server-side block-only cache solution. However, having said that, if your applications or server present shared storage to others (e.g. out the front-end) as NAS (NFS, CIFS, HDFS) using block storage (back-end), then FCSA can cache the storage I/O going to those back-end block devices.

    Is this an appliance?
    The short and simple answer is no; however, I would not be surprised to hear some creative software-defined marketer try to spin it as a flash cache software appliance. What this means is that FCSA is simply I/O and storage optimization software for caching to boost read performance for VM and PM servers.

    What does this hardware or storage agnostic stuff mean?
    Simple: it means that FCSA can work with various nand flash PCIe cards or flash SSD drives installed in servers, as well as with various back-end block storage including SAN from IBM or others. This includes being able to use block storage attached via iSCSI, SAS, FC or FCoE.

    What is the difference between EasyTier and FCSA?
    Simple: FCSA provides read acceleration via caching, which in turn should offload some reads from storage systems so that they can focus on handling writes or read-ahead operations. EasyTier, on the other hand, is, as its name implies, for tiering or movement of data in a more deterministic fashion.

    How do you get FCSA?
    It is software that you buy from IBM and that runs on an IBM x86 based server. It is licensed on a per-server basis including one year of service and support. IBM has also indicated that they have volume or multiple-server based licensing options.


    Does this mean IBM is competing with other software based IO optimization and cache tool vendors?
    IBM is focusing on selling and adding value to their server solutions. Thus, while you can buy the software from IBM for their servers (e.g. no bundling required), you cannot buy the software to run on servers from your AMD/SeaMicro, Cisco (including EMC/VCE and NetApp), Dell, Fujitsu, HDS, HP, Lenovo, Oracle or SuperMicro, among other vendors.

    Will this work on non-IBM servers?
    IBM is only supporting FCSA on IBM x86 based servers; however, you can buy the software without having to buy a solution bundle (e.g. servers or storage).

    What is this Cooperative Caching stuff?
    Cooperative caching takes the next step beyond a simple read cache with write-through, also supporting cache coherency in a shared environment as well as leveraging tighter application or guest operating system and storage system integration. For example, applications can work with storage systems to make intelligent, predictive, informed decisions on what to pre-fetch or read ahead and cache, as well as enable cache warming on restart. Another example: in a shared storage environment, if one server makes a change to a shared LUN or volume, the local server-side caches are also updated to prevent stale or inconsistent reads from occurring.
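    The shared-LUN example can be sketched as follows: when one host writes, it notifies its peers so their cached copies are invalidated rather than served stale. All names and the peer-notification mechanism here are illustrative assumptions on my part, not the actual cooperative caching protocol:

```python
# Hedged sketch of cooperative cache invalidation across hosts sharing a LUN.
# The broadcast-on-write scheme below is illustrative only.

class CoopCache:
    def __init__(self, name):
        self.name = name
        self.cache = {}    # this host's local server-side cache
        self.peers = []    # other hosts caching the same shared LUN

    def read(self, backend, block):
        if block not in self.cache:
            self.cache[block] = backend[block]   # miss: fetch and cache
        return self.cache[block]

    def write(self, backend, block, data):
        backend[block] = data                    # write-through to shared LUN
        self.cache[block] = data
        # Cooperative step: tell peer hosts to drop their cached copies
        # of this block so they re-read the new data instead of stale data.
        for peer in self.peers:
            peer.cache.pop(block, None)

lun = {0: b"v1"}                  # dict standing in for a shared LUN
a, b = CoopCache("a"), CoopCache("b")
a.peers, b.peers = [b], [a]
b.read(lun, 0)                    # host b caches v1
a.write(lun, 0, b"v2")            # host a updates the shared LUN
print(b.read(lun, 0))             # b'v2' -- b's stale copy was invalidated
```

    Without the notification step, host b would keep serving b"v1" from cache, which is exactly the stale-read hazard the answer below describes for non-cooperative server-side caches.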

    Can FCSA use multiple nand flash SSD devices on the same server?
    Yes, IBM FCSA supports the use of multiple server-side PCIe and/or drive-based SSD devices.

    How is cache coherency maintained including during a reboot?
    While data stored in the nand flash SSD device is persistent, it is up to the server and applications working with the storage systems to decide whether the data is coherent or stale and needs to be refreshed. Likewise, since FCSA is server-side and back-end storage system or SAN agnostic, without cooperative caching it will not know if the underlying data for a storage volume has changed unless notified by another server that modified it. Thus, if using shared back-end storage including SAN storage, do your due diligence to make sure multi-host access to the same LUNs or volumes is being coordinated with some server-side software to support cache coherency, something that applies to all vendors.


    What about cache warming or reloading of the read cache?
    Some vendors have tightly integrated caching software and storage systems, something IBM refers to as cooperative caching, that have the ability to re-warm the cache. With solutions that support cache re-warming, the cache software and storage systems work together to maintain cache coherency while pre-loading data from the underlying storage system based on hot bands or other profiles and experience. As of this announcement, FCSA does not support cache warming on its own.
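    The re-warming idea can be sketched as follows, assuming (hypothetically) that the storage system hands the cache software a list of hot blocks profiled before shutdown; the function and names are mine, not any vendor's API:

```python
# Hedged sketch of cache re-warming: after a restart, the cache is
# pre-loaded with blocks the storage system identified as "hot", so
# early reads hit cache instead of going to the back-end.

def prewarm(cache, backend, hot_blocks):
    for block in hot_blocks:
        cache[block] = backend[block]   # pre-fetch hot data into the cache
    return cache

backend = {i: f"block-{i}".encode() for i in range(100)}
hot = [3, 17, 42]                       # hot-band profile saved before restart
cache = prewarm({}, backend, hot)
print(sorted(cache))                    # [3, 17, 42]
```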

    Does IBM have service or tools to complement FCSA?
    Yes, IBM has an assessment, profile and planning tool that is available on a free consultation services basis, with a technician to check your environment. Of course, the next logical step would be for IBM to make the tool available via free download or on some other basis as well.

    Do I recommend and have I tried FCSA?
    On paper, or via WebEx, YouTube or another venue, FCSA looks interesting and capable, a good fit for some environments, particularly if they are IBM server based. However, since my PM and VMware VM based servers are from other vendors, and FCSA only runs on IBM servers, I have not actually given it a hands-on test drive yet. Thus, if you are looking at storage I/O optimization and caching software tools for your VM or PM environment, check out IBM FCSA to see if it meets your needs.


    General comments

    It is great to see server and storage systems vendors add value to their solutions with I/O and performance optimization as well as caching software tools. However, I am also concerned with the growing numbers of different software tools that only work with one vendor’s servers or storage systems, or at least are supported as such.

    This reminds me of a time not all that long ago (ok, for some, longer than others) when we had a proliferation of different host bus adapter (HBA) drivers and pathing drivers from various vendors. The result is a hodge podge (a technical term) of software running on different operating systems, hypervisors, PMs, VMs, and storage systems, all of which need to be managed. On the other hand, for the time being, perhaps the benefit will outweigh the pain of having different tools. That is where there are options from server-side vendor centric, storage system focused, or third-party software tool providers.

    Another consideration is that some tools work only in VMware environments; others support multiple hypervisors, while others also support bare-metal servers or PMs. Which applies to your environment will of course depend. After all, if you are an all-VMware environment, given that many of the caching tools tend to be VMware focused, you have more options than those who are still predominantly PM environments.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server and Storage IO Memory: DRAM and nand flash


    DRAM, DIMM, DDR3, nand flash memory, SSD, stating what’s often assumed

    Often what’s assumed is not always the case. For example, in and around server, storage and I/O networking circles, including virtual as well as cloud environments, terms such as nand (Negated AND or NOT AND) flash memory, aka Solid State Device (SSD), DRAM (Dynamic Random Access Memory) and DDR3 (Double Data Rate 3), not to mention DIMM (Dual Inline Memory Module), get tossed around with the assumption that everybody must know what they mean.

    On the other hand, I find plenty of people who are not sure what those, among other terms or things, are; sometimes they are even embarrassed to ask, particularly if they are a self-proclaimed expert.

    So for those who need a refresher or primer, here you go: an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) of my book "The Green and Virtual Data Center" (CRC Press), available at Amazon.com and other global venues in print and ebook formats.

    7.2.2 Memory

    Computers rely on some form of memory, ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random access memory (RAM), and non-volatile RAM (NVRAM) or nand flash (SSD), to external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Main memory or RAM, also known as dynamic RAM (DRAM) chips, is packaged in different ways, with a common form being dual inline memory modules (DIMMs) for notebook or laptop, desktop PC and servers.

    RAM main memory on a server is the fastest form of memory, second only to internal processor or chip-based registers and L1, L2 or local memory. RAM and processor-based memories are volatile and non-persistent in that when power is removed, the contents of memory are lost. As a result, some form of persistent memory is needed to keep programs and data when power is removed. Read-only memory (ROM) and NVRAM are both persistent forms of memory in that their contents are not lost when power is removed. The amount of RAM that can be installed into a server will vary with the specific architecture implementation and operating software being used. In addition to memory capacity and packaging format, the speed of memory is also important for moving data and programs quickly to avoid internal bottlenecks. Memory bandwidth performance increases with the width of the memory bus in bits and its frequency in MHz. For example, moving 8 bytes in parallel on a 64-bit bus at 100MHz provides a theoretical 800MByte/sec speed.
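    The bandwidth arithmetic in that example can be checked with a one-liner (the function name is mine):

```python
# Theoretical memory bandwidth = bus width (in bytes) * transfer rate (MHz),
# matching the worked example in the text.

def bandwidth_mb_per_s(bus_width_bits, mhz):
    return (bus_width_bits // 8) * mhz

print(bandwidth_mb_per_s(64, 100))   # 800 (MB/s): 64-bit bus at 100 MHz
```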

    To improve availability and increase the level of persistence, some servers include battery backed up RAM or cache to protect data in the event of a power loss. Another technique to protect memory data on some servers is memory mirroring where twice the amount of memory is installed and divided into two groups. Each group of memory has a copy of data being stored so that in the event of a memory failure beyond those correctable with standard parity and error correction code (ECC) no data is lost. In addition to being fast, RAM based memories are also more expensive and used in smaller quantities compared to external persistent memories such as magnetic hard disk drives, magnetic tape or optical based memory medias.

    Memory diagram
    Memory and Storage Pyramid

    The above shows a tiered memory model that may look familiar, as the bottom part is often expanded to show tiered storage. At the top of the memory pyramid is high-speed processor memory, followed by RAM, ROM, NVRAM and FLASH, along with many forms of external memory commonly called storage. More detail about tiered storage is covered in Chapter 8 (Data Storage – Disk, Tape, Optical, and Memory). In addition to being slower and lower cost than RAM based memories, disk storage along with NVRAM and FLASH based memory devices are also persistent.

    By being persistent, when power is removed, data is retained on the storage or memory device. Also shown in the above figure is that, on a relative basis, less energy is used to power storage or memory at the bottom of the pyramid than at upper levels, where performance increases. From a PCFE (Power, Cooling, Floor space, Economic) perspective, balancing memory and storage performance, availability, capacity and energy for a given function, quality of service and service level objective at a given cost needs to be kept in perspective, rather than simply seeking the lowest cost for the largest amount of memory or storage. In addition to gauging memory on capacity, other metrics include percent used, operating system page faults and page read/write operations, along with memory swap activity as well as memory errors.

    Base 2 versus base 10 numbering systems can account for some storage capacity that appears to be "missing" when real storage is compared to what is expected. Disk drive manufacturers use base 10 (decimal) to count bytes of data, while memory chip, server and operating system vendors typically use base 2 (binary). This has led to confusion when comparing a disk drive's base 10 GB with a chip memory's base 2 GB of capacity, such as 1,000,000,000 (10^9) bytes versus 1,073,741,824 (2^30) bytes. Nomenclature based on the International System of Units uses MiB, GiB and TiB to denote base 2 quantities (2^20, 2^30 and 2^40 bytes), with base 10 using MB, GB and TB. Most vendors do document how many bytes, sometimes in both base 2 and base 10, as well as the number of 512 byte sectors supported on their storage devices and storage systems, though it might be in the small print.
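    A quick illustration of where the "missing" capacity goes (the function name is mine):

```python
# A drive advertised in base 10 GB (10^9 bytes) reports fewer base 2
# GiB (2^30 bytes) in the operating system; nothing is actually missing.

def gb_to_gib(gb):
    return gb * 10**9 / 2**30

print(round(gb_to_gib(1), 3))     # 0.931  -- a "1 GB" device is ~0.931 GiB
print(round(gb_to_gib(1000), 1))  # 931.3  -- a "1 TB" drive shows ~931 GiB
```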

    Related more reading:
    How much storage performance do you want vs. need?
    Can RAID extend the life of nand flash SSD?
    Can we get a side of context with them IOPS and other storage metrics?
    SSD & Real Estate: Location, Location, Location
    What is the best kind of IO? The one you do not have to do
    SSD, flash and DRAM, DejaVu or something new?

    Ok, nuff said (for now).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier).


    How much storage performance do you want vs. need?


    How much storage I/O performance do you want vs. need?

    The answer to how much storage I/O performance you need vs. want probably depends on cost, on which applications are involved, and on the benefit, among other things.

    Storage I/O performance
    View Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?

    I did a piece over at 21cit titled Parsing the Need for Speed in Storage that looks at those and other related themes including metrics that matter across tiered storage.

    Here is an excerpt:

    Can storage speed be too fast? Or, put another way, how do you decide the return on investment or innovation from the financial resources you spend on storage and the various technologies that go into storage performance?

    Think about it: Fast storage needs fast servers, IO and networking interfaces, software, firmware, hypervisors, operating systems, drivers, and a file system or database, along with applications. Then there are the other buzzword bingo technologies that are also factors, among them fast storage DRAM and flash Solid State Devices (SSD).

    Some questions to ask about storage I/O performance include among others:

    • How do response time, latency, and think or wait times affect your environment and applications?
    • Do you know the location of your storage or data center performance bottlenecks?
    • If you remove bottlenecks in storage systems or appliances as well as in the data path, how will your application or the CPU in the server it runs on behave?
    • If your application server is currently showing high CPU due to the system overhead of having to wait for storage I/Os, you may see a positive improvement.
    • If more real work can be done now, will all of the components be ready to support each other without creating a new bottleneck?
    • Also, speaking of storage I/O performance, can we get a side of context with them IOPS and other metrics that matter?
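    One way to connect the response-time and bottleneck questions above is Little's Law, a standard queuing rule of thumb (my addition, not from the excerpt): outstanding I/Os = IOPS × response time. A small sketch:

```python
# Little's Law applied to storage I/O: to sustain a target IOPS rate at a
# given response time, the application must keep this many I/Os in flight.
# A quick sanity check when deciding how much performance you need vs. want.

def required_queue_depth(iops, latency_s):
    return iops * latency_s

print(required_queue_depth(10000, 0.001))  # 10.0 -- 10K IOPS at 1 ms needs ~10 in flight
print(required_queue_depth(500, 0.002))    # 1.0  -- a lightly threaded app caps out early
```

    The second line hints at why removing a storage bottleneck does not always help: if the application only keeps one I/O outstanding, faster storage shifts the limit to the application, one of the scenarios the questions above probe.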

    So how about it, how much performance, for primary, secondary, backup, cloud or virtual storage do you want vs. need?

    Ok, nuff said for now.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio
