Announcing Software Defined Data Infrastructure Essentials Book by Greg Schulz

New SDDI Essentials Book by Greg Schulz of Server StorageIO

Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft

server storage I/O data infrastructure trends

Update 1/21/2018

Over the past several months I have been posting, commenting, presenting and discussing more about Software Defined Data Infrastructure Essentials, aka SDDI (and the related SDDC and SDI). Now it is time to announce my new book (my 4th solo project), Software Defined Data Infrastructure Essentials (CRC Press). Software Defined Data Infrastructure Essentials is now generally available at various global venues in hardcover print as well as in various electronic versions, including via Amazon and CRC Press among others. For those attending VMworld 2017 in Las Vegas, I will be doing a book signing, meet and greet at 1PM Tuesday August 29 in the VMworld book store, as well as presenting at various other fall industry events.

Software Defined Data Infrastructure Essentials Book Announcement

(Via Businesswire) Stillwater, Minnesota – August 23, 2017  – Server StorageIO, a leading independent IT industry advisory and consultancy firm, in conjunction with publisher CRC Press, a Taylor and Francis imprint, announced the release and general availability of “Software-Defined Data Infrastructure Essentials,” a new book by Greg Schulz, noted author and Server StorageIO founder.

Software Defined Data Infrastructure Essentials

The Software Defined Data Infrastructure Essentials book covers physical, cloud, converged (and hyper-converged), container, and virtual server storage I/O networking technologies, revealing trends, tools, techniques, and tradecraft skills.

Data Infrastructures Protect Preserve Secure and Serve Information
Various IT and Cloud Infrastructure Layers including Data Infrastructures

From cloud web scale to enterprise and small environments, IoT to database, software-defined data center (SDDC) to converged and container servers, flash solid state devices (SSD) to storage and I/O networking, the book helps you develop or refine hardware, software, services and management experience, providing real-world examples for those involved with or looking to expand their data infrastructure education, knowledge and tradecraft skills.

Software Defined Data Infrastructure Essentials book topics include:

  • Cloud, Converged, Container, and Virtual Server Storage I/O networking
  • Data protection (archive, availability, backup, BC/DR, snapshot, security)
  • Block, file, object, structured, unstructured and data value
  • Analytics, monitoring, reporting, and management metrics
  • Industry trends, tools, techniques, decision making
  • Local, remote server, storage and network I/O troubleshooting
  • Performance, availability, capacity and  economics (PACE)

Where To Purchase Your Copy

Order via Amazon.com and CRC Press along with Google Books among other global venues.

What People Are Saying About Software Defined Data Infrastructure Essentials Book

“From CIOs to operations, sales to engineering, this book is a comprehensive reference, a must-read for IT infrastructure professionals, beginners to seasoned experts,” said Tom Becchetti, advisory systems engineer.

"We had a front row seat watching Greg present live in our education workshop seminar sessions for ITC professionals in the Netherlands material that is in this book. We recommend this amazing book to expand your converged and data infrastructure knowledge from beginners to industry veterans."

Gert and Frank Brouwer – Brouwer Storage Consultancy

"Software-Defined Data Infrastructures provides the foundational building blocks to improve your craft in several areas including applications, clouds, legacy, and more.  IT professionals, as well as sales professionals and support personal, stand to gain a great deal by reading this book."

Mark McSherry – Oracle Regional Sales Manager

"Greg Schulz has provided a complete ‘toolkit’ for storage management along with the background and framework for the storage or data infrastructure professional (or those aspiring to become one)."
Greg Brunton – Experienced Storage and Data Management Professional

“Software-defined data infrastructures are where hardware, software, server, storage, I/O networking and related services converge inside data centers or clouds to protect, preserve, secure and serve applications and data,” said Schulz.  “Both readers who are new to data infrastructures and seasoned pros will find this indispensable for gaining and expanding their knowledge.”

SDDI and SDDC components

More About Software Defined Data Infrastructure Essentials
Software Defined Data Infrastructures (SDDI) Essentials provides fundamental coverage of physical, cloud, converged, and virtual server storage I/O networking technologies, trends, tools, techniques, and tradecraft skills. From webscale, software-defined, containers, database, key-value store, cloud, and enterprise to small or medium-size business, the book is filled with techniques and tips to help develop or refine your server storage I/O hardware, software, Software Defined Data Center (SDDC), Software Data Infrastructure (SDI) or Software Defined Anything (SDx) and services skills. Whether you are new to data infrastructures or a seasoned pro, you will find this comprehensive reference indispensable for gaining as well as expanding experience with technologies, tools, techniques, and trends.

Software Defined Data Infrastructure Essentials SDDI SDDC content

This book is the definitive source, providing comprehensive coverage of IT and cloud data infrastructures for readers ranging from beginners to experienced industry experts. Coverage spans from higher-level applications down to the components (hardware, software, networks, and services) that get defined to create data infrastructures supporting business, web, and other information services. This includes servers, storage, I/O networks, hardware, software, management tools, physical, software-defined virtual, cloud, and containers (Docker and others), as well as bulk, block, file, object, cloud, virtual and software-defined storage.

Additional topics include data protection (availability, archiving, resiliency, HA, BC, BR, DR, backup), performance and capacity planning, converged infrastructure (CI), hyper-converged infrastructure (HCI), NVM and NVMe flash SSD, Storage Class Memory (SCM), NVMe over Fabrics, and benchmarking (including metrics that matter along with tools), plus who's doing what, how things work, and what to use when, where and why, along with current and emerging trends.

Book Features

ISBN-13: 978-1498738156
ISBN-10: 149873815X
Hardcover: 672 pages
(Available in Kindle and other electronic formats)
Over 200 illustrations and 70 plus tables
Frequently asked Questions (and answers) along with many tips
Various learning exercises, extensive glossary and appendices
Publisher: Auerbach/CRC Press Publications; 1 edition (June 19, 2017)
Language: English

SDDI and SDDC toolbox

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Data Infrastructures Protect Preserve Secure and Serve Information
Various IT and Cloud Infrastructure Layers including Data Infrastructures

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. With more data being created at a faster rate, data sizes growing larger, and applications adding functionality to transform data into information, there are more demands on data infrastructures and their underlying resources.

Software-Defined Data Infrastructure Essentials: Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft is for people who are currently involved with or looking to expand their knowledge and tradecraft skills (experience) of data infrastructures. Software-defined data centers (SDDC), software data infrastructures (SDI), software-defined data infrastructure (SDDI) and traditional data infrastructures are made up of software, hardware, services, and best practices and tools spanning servers, I/O networking, and storage from physical to software-defined virtual, container, and clouds. The role of data infrastructures is to enable and support information technology (IT) and organizational information applications.


Everything is not the same in businesses, organizations, and IT, and in particular in servers, storage, and I/O. This means different audiences will benefit from reading this book, and different readers may want to focus on various sections or chapters depending on their environments and applications.

If you are looking to expand your knowledge into an adjacent area or to understand what's under the hood, from converged and hyper-converged to traditional data infrastructure topics, this book is for you. For experienced storage, server, and networking professionals, this book connects the dots as well as provides coverage of virtualization, cloud, and other convergence themes and topics.

This book is also for those who are new or need to learn more about data infrastructure, server, storage, I/O networking, hardware, software, and services. Another audience for this book is experienced IT professionals who are now responsible for or working with data infrastructure components, technologies, tools, and techniques.

Learn more here about Software Defined Data Infrastructure (SDDI) Essentials book along with cloud, converged, and virtual fundamental server storage I/O tradecraft topics, order your copy from Amazon.com or CRC Press here, and thank you in advance for learning more about SDDI and related topics.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Backup, Big data, Big Data Protection, CMG & More with Tom Becchetti Podcast

server storage I/O trends

In this Server StorageIO podcast episode, I am joined by Tom Becchetti (@tbecchetti) for a Friday afternoon conversation recorded live at Meisters in Scandia Minnesota (thanks to the Meisters crew!).

Tom Becchetti

For those of you who may not know Tom, he has been in the IT, data center, data infrastructure, server and storage (as well as data protection) industry for many years (ok, decades) as a customer and vendor in various roles. Not surprisingly, our data infrastructure discussion involves server, software, storage, big data, backup, data protection, big data protection, CMG (Computer Measurement Group @mspcmg), copy data management, cloud, containers, and fundamental tradecraft skills among other related topics.

Check out Tom on twitter @tbecchetti and @mspcmg as well as his new website www.storagegodfather.com. Listen to the podcast discussion here (42 minutes) as well as on iTunes.


Ok, nuff said for now…

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book Software-Defined Data Infrastructure Essentials (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.


Updated Software Defined Data Infrastructure Webinars and Fall 2016 Events

server storage I/O trends

Here are the updated Server StorageIO fall 2016 webinar and event activities covering software defined data centers, data infrastructure, virtual, cloud, containers, converged and hyper-converged server, storage, I/O networking, performance and data protection among other topics.

December 7, 2016 – Webinar 11AM PT – BrightTalk
Hyper-Converged Infrastructure Decision Making

Hyper-Converged Infrastructure, HCI and CI Decision Making

Are Converged Infrastructures (CI), Hyper-Converged Infrastructures (HCI), Cluster in Box or Cloud in Box (CiB) solutions for you? The answer is: it depends on your needs, requirements and applications, among other criteria. In addition, are you focused on a particular technology solution or architecture approach, or looking for something that adapts to your needs? Join us in this discussion exploring your options for different scenarios as we look beyond the hype, including at the next wave of hyper-scale converged solutions, along with applicable decision-making criteria. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– What are your application and environment needs along with other objectives
– Explore various approaches for hyper-small and hyper-large environments
– What are you converging, hardware, hypervisors, management or something else?
– Does HCI mean hyper-vendor-lock-in, if so, is that a bad thing?
– When, where, why and how to use different scenarios

November 29-30, 2016 (New) – Converged & Hyper-Converged Decision Making
Is Converged Infrastructure Right For You?
Workshop Seminar – Nijkerk The Netherlands

Converged and server storage I/O data infrastructure trends
Agenda and topics to be covered include:

  • When you should decide to evaluate CI/HCI vs. a traditional approach
  • What the decision and evaluation criteria are for apples-to-apples vs. apples-to-pears comparisons
  • What the costs, benefits, and caveats of the different approaches are
  • How different applications such as VDI, VSI or database have different needs
  • What the network, storage, software license and training cost implications are
  • Different comparison criteria for smaller environments and remote offices vs. larger enterprises
  • How you will protect and secure a CI or HCI environment (HA, BC, BR, DR, Backup)
  • What the risks and benefits are of startups and companies with limited portfolios vs. big vendors
  • Do it yourself (DiY) vs. turnkey software vs. bundled tin-wrapped software solutions
  • We will also look at associated trends including software-defined, NVM/SSD, NVMe, VMware, Microsoft, KVM, Citrix/Xen, Docker, OpenStack among others.

Organized by:
Brouwer Storage Consultancy

November 28, 2016 (New) – Server Storage I/O Fundamental Trends V2.1116
What's new, what's the buzz, what you need to know about, and who's doing what
Workshop Seminar – Nijkerk The Netherlands

Converged and server storage I/O data infrastructure trends
Agenda and topics that will be covered include:

  • Who’s doing what, who are the new emerging vendors, solutions and technologies to watch
  • Non-Volatile Memory (NVM), flash solid state device (SSD), Storage Class Memory (SCM)
  • Networking with your servers and storage including NVMe, NVMeoF and RoCE
  • Cloud, Object and Bulk storage for data protection, archiving, near-line, scale-out
  • Data protection and software defined storage management (backup, BC, BR, DR, archive)
  • Microsoft Windows Server 2016, Nano, S2D and Hyper-V
  • VMware, OpenStack, Ceph, Docker and Containers, CI and HCI
  • EMC is gone, now there is Dell EMC and what that means
  • Various vendors and solutions from legacy to new and emerging
  • Recommendations, usage or deployment scenarios and tips
  • Some examples of who's doing what include AWS, Brocade, Cisco, Dell EMC, Enmotus, Fujitsu, Google, HDS, HP, Huawei, IBM, Intel, Lenovo, Mellanox, Micron, Microsoft, NetApp, Nutanix, Oracle, Pure, Quantum, Qumulo, Reduxio, Rubrik, Samsung, SanDisk, Seagate, SimpliVity, Tintri, Veeam, Veritas, VMware and WD among others.

Organized by:
Brouwer Storage Consultancy

November 23, 2016 – Webinar 10AM PT BrightTalk
BCDR and Cloud Backup Software Defined Data Infrastructures (SDDI) and Data Protection

BC DR Cloud Backup and Data Protection

The answer is BCDR and cloud backup; however, what was the question? Besides how to protect, preserve and secure your data, applications and data infrastructures against various threats and risks, what are some other common questions? For example, how to modernize, rethink, re-architect, and use new and old things in new ways. These and other topics, techniques, trends and tools have a common theme of BCDR and cloud backup. Join us in this discussion exploring your options for protecting data, applications and your data infrastructures spanning legacy, software-defined, virtual and cloud environments. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Various cloud storage options to meet different application PACE needs
– Do clouds need to be backed-up or protected?
– How to leverage clouds for various data protection objectives
– When, where, why and how to use different scenarios

November 23, 2016 – Webinar 9AM PT – BrightTalk
Cloud Storage – Hybrid and Software Defined Data Infrastructures (SDDI)

Cloud Storage Decision Making

You have been told, or determined, that you need (or want) to use cloud storage. OK, now what? What type of cloud storage do you need or want, or do you simply want cloud storage? What are your options and application requirements, including Performance, Availability, Capacity and Economics (PACE), along with access or interfaces? Where are your applications and where will they be located? What are your objectives for using cloud storage, or is it simply that you have heard or been told it's cheaper? Join us in this discussion exploring your options and considerations for cloud storage decision-making. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Various cloud storage options to meet different application PACE needs
– Storage for primary, secondary, performance, availability, capacity, backup, archiving
– Public, private and hybrid cloud storage options from block, file, object to application service
– When, where, why and how to use cloud storage for different scenarios

November 22, 2016 – Webinar 10AM PT – BrightTalk
Cloud Infrastructure Hybrid and Software Defined Data Infrastructures (SDDI)

Cloud Infrastructure and Hybrid Software Defined

At the core of cloud (public, private, hybrid) next generation data centers are software defined data infrastructures that exist to protect, preserve and serve applications, data and their resulting information services. Software defined data infrastructure core components include hardware, software, servers and storage configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for cloud services (and their underlying data infrastructures). Join us in this session to discuss trends, technologies, tools, techniques and services options for cloud infrastructures. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers and clouds
– Various types of clouds along with cloud services that determine how resources get defined
– When, where, why and how to use cloud Infrastructures along with associated resources

November 15, 2016 (New) – 11AM PT Webinar – Redmond Magazine and Solarwinds
The O.A.R. of Virtualization Scaling
A journey of optimization, automation, and reporting

Your journey to a flexible, scalable and secure IT universe begins now. Join Microsoft MVP and VMware vExpert (vSAN) Greg Schulz of Server StorageIO, along with VMware vExpert, Cisco Champion and SolarWinds Head Geek of Virtualization and Cloud Practice Kong Yang, for an interactive discussion empowering you to become the master of your software defined and virtual data center. Topics will include:

  • Trust your instruments and automation, however, verify they are working properly
  • Insight into how your environment, as well as automation tools, are working
  • Leverage automation to handle recurring tasks so you can focus on more productive activities
  • Capture, retain and transfer knowledge and tradecraft experiences into automation policies
  • Automated system management is only as good as the policies and data they rely upon
  • Optimize via automation that relies on reporting for insight, awareness and analytics 

November 3, 2016 (New) – Webinar 11AM PT – Redmond Magazine and Dell Software
Tailor Your Backup Data Repositories to Fit Your Security and Management Needs

Does data protection storage have you working overtime to take care of it? Do you have the flexibility to protect, preserve, secure and serve different workgroups or customers in a shared environment? Is your environment looking to expand with new applications and remote offices, yet your data protection is slowing you down? 

In this webinar we will look at current and emerging trends along with issues including how different threat risk challenges impact your evolving environment, as well as opportunities to address them. It’s time to deploy technology that works for you and your environment instead of you working for the solution. 

Attend and learn about:

  • Data protection trends, issues, regulatory compliance, challenges and opportunities
  • How to utilize purpose built appliances to protect and defend your systems, applications and data from various threat risks
  • Importance of timely insight and situational awareness into your data protection infrastructure
  • Protecting centralized and distributed remote office branch offices (ROBO) workgroups
  • What you can do today to optimize your environment

October 27, 2016 (New) – Webinar 10AM PT – Virtual Instruments
The Value of Infrastructure Insight

This webinar looks at the value of data center infrastructure insight both as a technology and as a business productivity enabler. Besides productivity, having insight into how data infrastructure resources (servers, storage, networks, system software) are used enables informed analysis, troubleshooting, planning, forecasting and cost-effective decision-making. In other words, data center infrastructure insight, based on infrastructure performance analytics, enables you to avoid flying blind and gives you situational awareness for proactive Information Technology (IT) management. Your return on innovation is increased, and leveraging insight and awareness along with metrics that matter drives return on investment (ROI) along with enhanced service delivery.

October 20, 2016 – Webinar 9AM PT – BrightTalk
Next-Gen Data Centers Software Defined Data Infrastructures (SDDI) including Servers, Storage and Virtualization

Cloud Storage Decision Making

At the core of next generation data centers are software defined data infrastructures that enable, protect, preserve and serve applications, data and their resulting information services. Software defined data infrastructure core components include hardware, software, servers and storage configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for virtual servers and storage. Join us in this session to discuss trends, technologies, tools, techniques and services around storage and virtualization for today, tomorrow, and in the years to come. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers
– Server and Storage Virtualization better together, with and without CI/HCI
– Many different facets (types) of Server virtualization and virtual storage
– When, where, why and how to use storage virtualization and virtual storage

September 20, 2016 – Webinar 8AM PT – BrightTalk
Software Defined Data Infrastructures (SDDI) Enabling Software Defined Data Centers – Part of Software-Defined Storage summit

Cloud Storage Decision Making

Data Infrastructures exist to support applications and their underlying resource needs. Software-Defined Infrastructures (SDI) are what enable Software-Defined Data Centers, and at the heart of an SDI is storage that is software-defined. This spans cloud, virtual and physical storage and is a key focal point today. Join us in this session to discuss trends, technologies, tools, techniques and services around SDI and SDDC today, tomorrow, and in the years to come.

September 13, 2016 – Webinar 11AM PT – Redmond Magazine and Dell Software
Windows Server 2016 and Active Directory: What's New and How to Plan for Migration

Windows Server 2016 is expected to GA this fall and is a modernized version of the Microsoft operating system that includes new capabilities such as Active Directory (AD) enhancements. AD is critical to organizational operations providing control and secure access to data, networks, servers, storage and more from physical, virtual and cloud (public and hybrid). But over time, organizations along with their associated IT infrastructures have evolved due to mergers, acquisitions, restructuring and general growth. As a result, yesterday’s AD deployments may look like they did in the past while using new technology (e.g. in old ways). Now is the time to start planning for how you will optimize your AD environment using new tools and technologies such as those in Windows Server 2016 and AD in new ways. Optimizing AD means having a new design, performing cleanup and restructuring prior to migration vs. simply moving what you have. Join us for this interactive webinar to begin planning your journey to Windows Server 2016 and a new optimized AD deployment that is flexible, scalable and elastic, and enables resilient infrastructures. You will learn:

  • What’s new in Windows Server 2016 and how it impacts your AD
  • Why an optimized AD is critical for IT environments moving forward
  • How to gain insight into your current AD environment
  • AD restructuring planning considerations

September 8, 2016 – Webinar 11AM PT (Watch on Demand) – Redmond Magazine, Acronis and Unitrends
Data Protection for Modern Microsoft Environments

Your organization’s business depends on modern Microsoft® environments — Microsoft Azure and new versions of Windows Server 2016, Microsoft Hyper-V with RCT, and business applications — and you need a data protection solution that keeps pace with Microsoft technologies. If you lose mission-critical data, it can cost you $100,000 or more for a single hour of downtime. Join our webinar and learn how different data protection solutions can protect your Microsoft environment, whether you store data on company premises, at remote locations, in private and public clouds, and on mobile devices.

Where To Learn More

What This All Means

It's fall, back-to-school and learning time; join me at these and other upcoming events.

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Server and Storage I/O Benchmarking 101 for Smarties

Server Storage I/O Benchmarking 101 for Smarties or dummies ;)

server storage I/O trends

This is the first of a series of posts and links to resources on server storage I/O performance and benchmarking (view more and follow-up posts here).

The best I/O is the I/O that you do not have to do; the second best is the one with the least impact as well as low overhead.

server storage I/O performance

Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.

Via Drew:

Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

Read more here including some of my comments, tips and recommendations.

Drew provides a good summary and overview in his article, which is a great opener for this first post in a series on server storage I/O benchmarking and related resources.

You can think of this series (along with Drew’s article) as server storage I/O benchmarking fundamentals (e.g. 101) for smarties (e.g. non-dummies ;) ).

Note that even if you are not a server, storage or I/O expert, you can still be considered a smarty vs. a dummy if you found the need or interest to read as well as learn more about benchmarking, metrics that matter, tools, technology and related topics.

Server and Storage I/O benchmarking 101

There are different reasons for benchmarking. For example, you might be asked or want to know how many IOPs a disk, Solid State Device (SSD), or storage system can do, such as for a 15K RPM (revolutions per minute) 146GB SAS Hard Disk Drive (HDD). Sure, you can go to a manufacturer's website and look at the speeds and feeds (technical performance numbers), however are those metrics applicable to your environment's applications or workloads?

You might get higher IOPs with a smaller I/O size on sequential reads vs. random writes, which will also depend on what the HDD is attached to. For example, are you going to attach the HDD to a storage system or appliance with RAID and caching? Are you going to attach the HDD to a PCIe RAID card, or will it be part of a server or storage system? Or are you simply going to put the HDD into a server or workstation and use it as a drive without any RAID or performance acceleration?

What this all means is understanding what it is that you want to benchmark or test, in order to learn what the system, solution, service or specific device can do under different workload conditions.
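As a rough back-of-envelope example of that context, consider what a single 15K RPM HDD might deliver for small random I/Os. The sketch below is illustrative only; the seek and transfer times are assumptions, so substitute numbers from your drive's specification sheet or, better, from your own measurements.

```python
# Rough estimate of small random-read IOPS for a single 15K RPM HDD.
# The seek and transfer times below are illustrative assumptions; substitute
# the numbers from your drive's specification sheet or your own measurements.
rpm = 15_000
avg_rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, ~2.0 ms
avg_seek_ms = 3.5                                # assumed average seek time
transfer_ms = 0.1                                # assumed transfer time for a small (e.g. 8 KB) block

service_time_ms = avg_rotational_latency_ms + avg_seek_ms + transfer_ms
iops = 1_000 / service_time_ms

print(f"Estimated service time: {service_time_ms:.1f} ms per random I/O")
print(f"Estimated random IOPS:  {iops:.0f}")     # roughly 175-180 for these assumptions
```

Contrast an estimate like that with a data sheet's large sequential transfer numbers and you can see why I/O size, read vs. write, and random vs. sequential context matters when comparing metrics.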

Some benchmark and related topics include

  • What are you trying to benchmark
  • Why do you need to benchmark something
  • What are some server storage I/O benchmark tools
  • What is the best benchmark tool
  • What to benchmark, how to use tools
  • What are the metrics that matter
  • What is benchmark context why does it matter
  • What are marketing hero benchmark results
  • What to do with your benchmark results
  • Server storage I/O benchmark step test (example of step test results with various workers and workloads; see the sketch after this list)

  • What do the various metrics mean (can we get a side of context with them metrics?)
  • Why look at server CPU if doing storage and I/O networking tests
  • Where and how to profile your application workloads
  • What about physical vs. virtual vs. cloud and software defined benchmarking
  • How to benchmark block DAS or SAN, file NAS, object, cloud, databases and other things
  • Avoiding common benchmark mistakes
  • Tips, recommendations, things to watch out for
  • What to do next
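
Below is a minimal sketch of the step test idea mentioned above: increase the number of workers step by step and record IOPS plus average latency at each step. It is purely illustrative of the methodology rather than a replacement for purpose-built tools such as fio, vdbench or Microsoft Diskspd; the file name, sizes, durations and step counts are assumptions, and Python threading plus operating system caching mean the absolute numbers are not representative of the underlying device.

```python
# Minimal step-test sketch: random 4 KB reads with an increasing number of
# worker threads, reporting IOPS and average latency per step. Illustrative
# only; file name, sizes and step counts are assumptions, and OS caching plus
# Python threading mean the numbers do not reflect the underlying device.
import os
import random
import threading
import time

TEST_FILE = "testfile.bin"           # hypothetical scratch file name
FILE_SIZE = 256 * 1024 * 1024        # 256 MiB working set (assumption)
IO_SIZE = 4096                       # 4 KB reads
STEP_SECONDS = 10                    # run time per step
WORKER_STEPS = [1, 2, 4, 8, 16]      # workers to test at each step

if not os.path.exists(TEST_FILE) or os.path.getsize(TEST_FILE) < FILE_SIZE:
    with open(TEST_FILE, "wb") as f:
        f.truncate(FILE_SIZE)        # sparse file is fine for demonstrating the method

def worker(stop_event, results, idx):
    ops, busy = 0, 0.0
    with open(TEST_FILE, "rb", buffering=0) as f:
        while not stop_event.is_set():
            f.seek(random.randrange(0, FILE_SIZE - IO_SIZE))
            start = time.perf_counter()
            f.read(IO_SIZE)
            busy += time.perf_counter() - start
            ops += 1
    results[idx] = (ops, busy)

for workers in WORKER_STEPS:
    stop_event = threading.Event()
    results = [None] * workers
    threads = [threading.Thread(target=worker, args=(stop_event, results, i))
               for i in range(workers)]
    for t in threads:
        t.start()
    time.sleep(STEP_SECONDS)
    stop_event.set()
    for t in threads:
        t.join()
    total_ops = sum(r[0] for r in results)
    avg_latency_ms = sum(r[1] for r in results) / max(total_ops, 1) * 1000
    print(f"workers={workers:2d}  IOPS={total_ops / STEP_SECONDS:9.0f}  "
          f"avg latency={avg_latency_ms:6.3f} ms")
```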

server storage I/O trends

Where to learn more

The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

Drew Robb’s benchmarking quick reference guide
Server storage I/O benchmarking tools, technologies and techniques resource page
Server and Storage I/O Benchmarking 101 for Smarties.
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

Wrap up and summary

We have just scratched the surface when it comes to benchmarking cloud, virtual and physical server storage I/O and networking hardware and software, along with associated tools, techniques and technologies. However, hopefully this post and the links mentioned above give you a basis for connecting the dots of what you already know, or enable learning more about workloads (synthetic and real-world), benchmarks and associated topics. Needless to say, there are many more things that we will cover in future posts (e.g. keep an eye on and bookmark the server storage I/O benchmark tools and resources page here).

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

I/O, I/O how well do you know good bad ugly server storage I/O iops?

How well do you know good bad ugly I/O iops?

server storage i/o iops activity data infrastructure trends

Updated 2/10/2018

There are many different types of server storage I/O IOPS associated with various environments, applications and workloads. Some I/O activity is measured in IOPS, other activity in transactions per second (TPS), files or messages per unit of time (hour, minute, second), gets, puts or other operations. The best I/O is the one you do not have to do.

What about all the cloud, virtual, software defined and legacy-based applications that still need to do I/O?

If no IO operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible with the best locality of reference.

Also keep in mind that aggregation (e.g. consolidation) can cause aggravation (server storage I/O performance bottlenecks).

aggregation causes aggravation
Example of aggregation (consolidation) causing aggravation (server storage i/o blender bottlenecks)

And the third best?

It’s the one that can be done in less time, or at the least cost or impact to the requesting application, which often means moving further down the memory and storage stack.

solving server storage i/o blender and other bottlenecks
Leveraging flash SSD and cache technologies to find and fix server storage I/O bottlenecks

On the other hand, any IOP, regardless of whether it is for block, file or object storage, that involves some context is better than one without, particularly when it involves metrics that matter (here, here and here [webinar]).

Server Storage I/O optimization and effectiveness

The problem with I/Os is that they are basic operations for getting data into and out of a computer or processor, so there is no way to avoid all of them, unless you have a very large budget. Even if you have a large budget that can afford an all-flash SSD solution, you may still run into bottlenecks or other barriers.

I/Os require CPU or processor time and memory to set up and then process the results, as well as I/O and networking resources to move data to its destination or retrieve it from where it is stored. While I/Os cannot be eliminated, their impact can be greatly improved or optimized by, among other techniques, doing fewer of them via caching and by grouping reads or writes (pre-fetch, write-behind).
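
As a simple illustration of those two techniques, here is a sketch (using assumed, hypothetical interfaces rather than any particular product's design) that combines a small LRU read cache, which turns repeat reads into cache hits so no back-end I/O is issued, with a write-behind buffer that groups many small writes into one larger flush.

```python
# Sketch of doing fewer back-end I/Os: an LRU read cache plus a write-behind
# buffer that groups small writes. The backend interface (read / write_many)
# is a hypothetical assumption, not any particular product's design.
from collections import OrderedDict

class CachedStore:
    def __init__(self, backend, cache_entries=1024, flush_threshold=64):
        self.backend = backend
        self.cache = OrderedDict()       # LRU read cache
        self.cache_entries = cache_entries
        self.dirty = {}                  # buffered (write-behind) updates
        self.flush_threshold = flush_threshold

    def _evict_if_needed(self):
        while len(self.cache) > self.cache_entries:
            self.cache.popitem(last=False)      # drop least recently used entry

    def read(self, key):
        if key in self.dirty:            # newest data may still be unflushed
            return self.dirty[key]
        if key in self.cache:            # cache hit: no back-end I/O at all
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.backend.read(key)   # cache miss: one back-end read
        self.cache[key] = value
        self._evict_if_needed()
        return value

    def write(self, key, value):
        self.dirty[key] = value          # write-behind: buffer instead of writing now
        self.cache[key] = value          # keep the read cache consistent
        self._evict_if_needed()
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.dirty:
            self.backend.write_many(self.dirty)  # one grouped back-end write
            self.dirty = {}

class CountingBackend:
    """Toy backend that counts how many back-end I/O operations it receives."""
    def __init__(self):
        self.reads, self.flushes, self.data = 0, 0, {}
    def read(self, key):
        self.reads += 1
        return self.data.get(key, 0)
    def write_many(self, updates):
        self.flushes += 1
        self.data.update(updates)

backend = CountingBackend()
store = CachedStore(backend)
for i in range(1000):
    store.read(i % 50)   # repeated reads: only ~50 back-end reads ever happen
    store.write(i, i)    # 1,000 writes are grouped into ~16 back-end flushes
store.flush()
print(f"back-end reads: {backend.reads}, back-end write flushes: {backend.flushes}")
```

The same trade-off applies, of course: buffering and grouping add latency to individual operations, which is the point of the errands analogy that follows.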

server storage I/O STI and SUT

Think of it this way: Instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip. However, that optimization may also mean your drive will take longer. So, sometimes it makes sense to go on a couple of quick, short, low-latency trips instead of one larger one that takes half a day even as it accomplishes many tasks. Of course, how far you have to go on those trips (i.e., their locality) makes a difference about how many you can do in a given amount of time.

Locality of reference (or proximity)

What is locality of reference?

This refers to how close (i.e., its place) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, ready to be acted on immediately. This would be followed by level 1, 2, and 3 (L1, L2, and L3) onboard caches, followed by main memory (DRAM). After that comes solid-state memory, typically NAND flash, either on PCIe cards or accessible via a direct attached storage (DAS), SAN, or NAS device.
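
To put that hierarchy in perspective, the following snippet prints approximate, order-of-magnitude access times for each tier. These figures are illustrative assumptions only; actual latencies vary widely by platform, generation and workload.

```python
# Approximate, order-of-magnitude access times per tier (illustrative
# assumptions only; actual numbers vary widely by platform and generation).
tiers_ns = {
    "CPU register / L1 cache": 1,
    "L2 / L3 cache": 10,
    "Main memory (DRAM)": 100,
    "PCIe NVMe / NAND flash SSD": 100_000,      # ~100 microseconds
    "SAS/SATA SSD": 200_000,
    "15K RPM HDD": 5_000_000,                   # ~5 milliseconds
    "Networked (SAN/NAS) HDD": 8_000_000,
}
base = tiers_ns["CPU register / L1 cache"]
for tier, ns in tiers_ns.items():
    print(f"{tier:28s} ~{ns:>11,} ns  ({ns // base:,}x an L1 access)")
```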

server storage I/O locality of reference

Even though a PCIe NAND flash card is close to the processor, there still remains the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, this information is used to help with cache hits, cache use, and cache effectiveness vs. simply boosting cache use.

SSD to the rescue?

What can you do to cut the impact of I/Os?

There are many steps one can take, starting with establishing baseline performance and availability metrics.

The metrics that matter include IOP’s, latency, bandwidth, and availability. Then, leverage metrics to gain insight into your application’s performance.
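
To make those metrics concrete, here is a small worked example (the I/O size, rate and latency are assumptions, not measurements): bandwidth is simply IOPS multiplied by I/O size, and Little's Law relates latency to how many I/Os must be outstanding (queue depth) to sustain a given rate.

```python
# How the metrics that matter relate (a worked example, not a benchmark).
io_size_bytes = 8 * 1024      # 8 KB I/Os (assumption)
iops = 20_000                 # observed or target I/O rate (assumption)
avg_latency_s = 0.002         # 2 ms average response time (assumption)

bandwidth_mb_s = iops * io_size_bytes / 1_000_000
outstanding_ios = iops * avg_latency_s           # Little's Law: L = X * R

print(f"Bandwidth:        {bandwidth_mb_s:.1f} MB/s")
print(f"Outstanding I/Os: {outstanding_ios:.0f} (average queue depth needed)")
```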

Understand that IO’s are a fact of applications doing work (storing, retrieving, managing data) no matter whether systems are virtual, physical, or running up in the cloud. But it’s important to understand just what a bad IO is, along with its impact on performance. Try to identify those that are bad, and then find and fix the problem, either with software, application, or database changes. Perhaps you need to throw more software caching tools, hypervisors, or hardware at the problem. Hardware may include faster processors with more DRAM and faster internal busses.

Leveraging local PCIe flash SSD cards for caching or as targets is another option.

You may want to use storage systems or appliances that rely on intelligent caching and storage optimization capabilities to help with performance, availability, and capacity.

Where to gain insight into your server storage I/O environment

There are many tools that can be used to gain insight into your server storage I/O environment across cloud, virtual, software defined and legacy environments, as well as from different layers (e.g. applications, databases, file systems, operating systems, hypervisors, servers, storage, I/O networking). Many applications and databases have either built-in or optional tools from their provider, third parties, or other sources that can give information about the work activity being done. Likewise, there are tools to dig deeper into the various data infrastructure layers to see what is happening, as shown in the following figures.

application storage I/O performance
Gaining application and operating system level performance insight via different tools

windows and linux storage I/O performance
Insight and awareness via operating system tools on Windows and Linux

In the above example, Spotlight on Windows (SoW), which you can download for free from Dell here, along with Ubuntu utilities are shown. You could also use other tools to look at server storage I/O performance, including Windows Perfmon among others.
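
For a quick, scriptable view on either Windows or Linux, a hedged sketch using the third-party psutil library (not a replacement for the tools named above) can sample the operating system's disk counters and derive IOPS and throughput over an interval.

```python
# Sample operating system disk counters via psutil (pip install psutil) and
# derive IOPS and throughput over an interval; illustrative only, and far less
# detailed than Perfmon, esxtop, iostat or vendor tools.
import time
import psutil

INTERVAL = 5  # seconds between samples (assumption)

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL)
after = psutil.disk_io_counters(perdisk=True)

for disk, now in after.items():
    prev = before.get(disk)
    if prev is None:
        continue
    reads = (now.read_count - prev.read_count) / INTERVAL
    writes = (now.write_count - prev.write_count) / INTERVAL
    read_mb = (now.read_bytes - prev.read_bytes) / INTERVAL / 1_000_000
    write_mb = (now.write_bytes - prev.write_bytes) / INTERVAL / 1_000_000
    print(f"{disk:10s} read IOPS={reads:8.1f} write IOPS={writes:8.1f} "
          f"read MB/s={read_mb:7.2f} write MB/s={write_mb:7.2f}")
```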

vmware server storage I/O
Hypervisor performance using VMware ESXi / vsphere built-in tools

vmware server storage I/O performance
Using Visual ESXtop to dig deeper into virtual server storage I/O performance

vmware server storage i/o cache
Gaining insight into virtual server storage I/O cache performance

Wrap up and summary

There are many approaches to address (e.g. find and fix) data center and server storage I/O bottlenecks, rather than simply moving or masking them. Having insight and awareness into how your environment and applications are behaving is important for knowing where to focus resources. Also keep in mind that a bit of flash SSD or DRAM cache in the applicable place can go a long way, while a lot of cache will also cost you cash. Even if you can't eliminate I/Os, look for ways to decrease their impact on your applications and systems.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Keep in mind: SSD, including flash and DRAM among others, is in your future; the question is where, when, with what, how much, and whose technology or packaging.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Data Center Infrastructure Management (DCIM) and IRM

StorageIO industry trends cloud, virtualization and big data

There are many business drivers and technology reasons for adopting data center infrastructure management (DCIM) and infrastructure resource management (IRM) techniques, tools and best practices. Today's agile data centers need updated management systems, tools, and best practices that allow organizations to plan, run at a low cost, and analyze for workflow improvement. After all, there is no such thing as an information recession, which means the need to move, process and store more data keeps growing. With budget and other constraints, organizations need to be able to stretch available resources further while reducing costs, including those for physical space and energy consumption.

The business value proposition of DCIM and IRM includes:

DCIM, Data Center, Cloud and storage management figure

Data Center Infrastructure Management (DCIM), also known as IRM, has, as its name describes, a focus on managing resources in the data center or information factory. IT resources include physical floor and cabinet space, power and cooling, networks and cabling, physical (and virtual) servers and storage, and other hardware and software management tools. For some organizations, DCIM will have a more facilities-oriented view, focusing on physical floor space, power and cooling. Other organizations will have a converged view crossing hardware, software and facilities, along with how those are used to effectively deliver information services in a cost-effective way.

Common to all DCIM and IRM practices are metrics and measurements, along with other related information about available resources, for gaining situational awareness. Situational awareness enables visibility into what resources exist, how they are configured and being used, by what applications, and their performance, availability, capacity and economic effectiveness (PACE) in delivering a given level of service. In other words, DCIM enabled with metrics and measurements that matter allows you to avoid flying blind and to make prompt and effective decisions.

DCIM, Data Center and Cloud Metrics Figure

DCIM comprises the following:

  • Facilities, power (primary and standby, distribution), cooling, floor space
  • Resource planning, management, asset and resource tracking
  • Hardware (servers, storage, networking)
  • Software (virtualization, operating systems, applications, tools)
  • People, processes, policies and best practices for management operations
  • Metrics and measurements for analytics and insight (situational awareness)

The evolving DCIM model is built around elasticity, multi-tenancy, scalability and flexibility, and is metered and service-oriented. Service-oriented means being able to rapidly deliver new services while keeping customer experience and satisfaction in mind. Also part of being focused on the customer is enabling organizations to be competitive with outside service offerings while being more productive and economically efficient.

DCIM, Data Center and Cloud E2E management figure

While specific technology domain areas or groups may be focused on their respective areas, interdependencies across IT resource areas are a matter of fact for efficient virtual data centers. For example, provisioning a virtual server relies on configuration and security of the virtual environment, physical servers, storage and networks along with associated software and facility related resources.

You can read more about DCIM, ITSM and IRM in this white paper that I did, as well as in my books Cloud and Virtual Data Storage Networking (CRC Press) and The Green and Virtual Data Center (CRC Press).

Ok, nuff said, for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Dell is buying Quest software, not the phone company Qwest

Dell Storage Customer Advisory Panel (CAP)

For those not familiar with Quest, they are a software company, not to be confused with the telephone communications company formerly known as Qwest (now known as CenturyLink).

Both Dell and Quest have been on software-related acquisition initiatives the past few years, with Quest having purchased vKernel, Vizioncore (vRanger virtualization backup), and BakBone (which had acquired Alavarii and Asempra) for traditional backup and data protection, among others. Not to be outdone, in addition to purchasing Quest, Dell has also more recently bought AppAssure (Disclosure: StorageIOblog site sponsor) for data protection, SonicWALL and Wyse, along with some other recent purchases (ASAP, Boomi, Compellent, Exanet, EqualLogic, Force10, InsightOne, KACE, Ocarina, Perot, RNA and Scalent among others).

What does this mean?
Dell is expanding the scope of its business with more products (hardware, software), solution bundles, services and channel partnering opportunities. Some of the software tools and focus areas that Quest brings to the Dell table or portfolio include:

Database management (Oracle, SQL Server)
Data protection (virtual and physical backup, replication, BC, DR)
Performance monitoring (DCIM and IRM) of applications and infrastructure
User workspace management (application delivery)
Windows server management (migrate and manage, AD, Exchange, SharePoint)
Identity and access management (security, compliance, privacy)

What does Dell get by spending over $2B USD on Quest?

  • Additional software titles or products
  • More software developers for their Software group
  • Sales people to help promote, partner and sell software solutions
  • Create demand pull for other Dell products and services via software
  • Increase its partner reach via existing Quest VARs and business partners
  • Extend the size of the Dell software and intellectual property (IP) portfolio
  • New revenue streams that complement existing products and lines of business
  • Potential for a better rate of return on some of its $12B USD in cash or equivalents

Is this a good move for Dell?
Yes, for the above reasons.

Is there a warning in this for Dell?
Yes, they need to execute, keeping the Quest team, along with their other teams, focused on the respective partners, products and market opportunities while expanding into new areas. Dell also needs to leverage Quest to further its cause in creating trust, confidence and strategic relationships with channel partners to reach new markets in different geographies. In addition, Dell needs to articulate its strategy and positioning of the various solutions to avoid products being perceived as competing vs. complementing each other.

Additional Dell related links:
Dell Storage Customer Advisory Panel (CAP)
Dell Storage Forum 2011 revisited
Dude, is Dell doing a disk deal again with Compellent?
Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize
Post Holiday IT Shopping Bargains, Dell Buying Exanet?
Dell Will Buy Someone, However Not Brocade (At least for now)

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Measuring Windows performance impact for VDI planning

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to measuring the impact of Windows boot performance and what that means for planning Virtual Desktop Infrastructure (VDI) initiatives.

With Virtual Desktop Infrastructure (VDI) initiative adoption being a popular theme associated with cloud and dynamic infrastructure environments, a related discussion point is the impact on networks, servers and storage during boot or startup activity, and how to avoid bottlenecks. VDI solution vendors include Citrix, Microsoft and VMware, along with various server, storage, networking and management tools vendors.

A common storage and network related topic involving VDI is boot storms, when many workstations or desktops all start up at the same time. However, any discussion around VDI and its impact on networks, servers and storage should also be expanded from read-centric boots to write-intensive shutdown or maintenance activity as well.

Having an understanding of what your performance requirements are is important to adequately design a configuration that will meet your Quality of Service (QoS) and service level objectives (SLOs) for VDI deployment, in addition to knowing what to look for in candidate server, storage and networking technologies. For example, knowing how your different desktop applications and workloads perform on a normal basis provides a baseline to compare with during busy periods or times of trouble. Another benefit is that when shopping for storage systems and reviewing various benchmarks, knowing what your actual performance and application characteristics are helps to align the applicable technology to your QoS and SLO needs while avoiding apples-to-oranges benchmark comparisons.
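
As a back-of-envelope planning sketch, the calculation below shows how per-desktop boot activity aggregates into the demand your servers, network and storage must absorb. Every number is an illustrative assumption; replace them with figures you actually measure (for example with hIOmon, as discussed below).

```python
# Back-of-envelope boot storm sizing; every value is an illustrative assumption.
desktops = 500               # number of desktops that must boot in the window
ios_per_boot = 30_000        # total I/O operations one desktop issues while booting
boot_window_s = 300          # acceptable window (5 minutes) for all desktops to finish
read_share = 0.8             # boot traffic is typically read heavy; shutdown/patching skews to writes

required_iops = desktops * ios_per_boot / boot_window_s
print(f"Sustained demand: {required_iops:,.0f} IOPS "
      f"(~{required_iops * read_share:,.0f} reads/s, "
      f"~{required_iops * (1 - read_share):,.0f} writes/s) "
      f"to boot {desktops} desktops in {boot_window_s // 60} minutes")
```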

Check out the entire piece, including some test results using the hIOmon tool from hyperIO to gather actual workstation performance numbers.

Keep in mind that the best benchmark is your actual applications running as close as possible to their typical workload and usage scenarios.

Also keep in mind that fast workstations need fast networks, fast servers and fast storage.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

End to End (E2E) Systems Resource Analysis (SRA) for Cloud and Virtual Environments

A new StorageIO Industry Trends and Perspective (ITP) white paper titled “End to End (E2E) Systems Resource Analysis (SRA) for Cloud, Virtual and Abstracted Environments” is now available at www.storageio.com/reports, compliments of SANpulse technologies.

End to End (E2E) Systems Resource Analysis (SRA) for Virtual, Cloud and Abstracted Environments: Importance of Situational Awareness for Virtual and Abstracted Environments

Abstract:
Many organizations are in the planning phase or already executing initiatives moving their IT applications and data to abstracted, cloud (public or private), virtualized or other forms of efficient, effective dynamic operating environments. Others are in the process of exploring where, when, why and how to use various forms of abstraction techniques and technologies to address various issues. Issues include opportunities to leverage virtualization and abstraction techniques that enable IT agility, flexibility, resiliency and scalability in a cost-effective yet productive manner.

An important need when moving to a cloud or virtualized dynamic environment is to have situational awareness of IT resources. This means having insight into how IT resources are being deployed to support business applications and to meet service objectives in a cost-effective manner.

Awareness of IT resource usage provides insight necessary for both tactical and strategic planning as well as decision making. Effective management requires insight into not only what resources are at hand but also how they are being used, in order to decide where different applications and data should be placed to effectively meet business requirements.

Learn more about the importance and opportunities associated with gaining situational awareness using E2E SRA for virtual, cloud and abstracted environments in this StorageIO Industry Trends and Perspective (ITP) white paper, compliments of SANpulse technologies, by clicking here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Spring 2010 StorageIO Newsletter

    Welcome to the spring 2010 edition of the Server and StorageIO (StorageIO) news letter.

    This edition follows the inaugural issue (Winter 2010) incorporating feedback and suggestions as well as building on the fantastic responses received from recipients.

    A couple of enhancements included in this issue (marked as New!) include a Featured Related Site along with Some Interesting Industry Links. Another enhancement based on feedback is to include additional comment that in upcoming issues will expand to include a column article along with industry trends and perspectives.

    Spring 2010 StorageIO Newsletter (image)

    You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the spring 2010 newsletter as HTML or PDF, or to go to the newsletter page.

    Follow via Google FeedBurner here or via email subscription here.

    You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

    Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

    Also, a very big thank you to everyone who has helped make StorageIO a success!

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    March Metrics and Measuring Social Media

    What metrics matter for social media and networking?

    Of course the answer should be it depends.

    For example, would that be the number of followers, or how many posts, tweets, or videos you publish?

    How about the number of page hits, pages read or unique visitors to a site, perhaps time on site?

    Or, how about the number of times a visitor returns to a site or shares the link or information with others?

    What about click through rates, page impressions, revenue per page and related metrics?

    Maybe the metric is your blog ranking or number of points on your favorite community site such as Storage Monkeys or Wikibon among others?

    Another metric could be the number of comments received, particularly if your venue is more interactive for debate or discussion purposes compared to a site with many viewers who prefer to read (lurk). Almost forgot: the number of LinkedIn contacts or Facebook friends, along with YouTube and other videos or podcasts, as well as who is on your blog roll.

    Let's not forget how many are following or being followed, along with RSS subscribers, as metrics.

    To say that there are many different metrics, along with reasons or interests around them, would be an understatement.

    Why do metrics matter in social networking?

    One reason metrics are used (even by those who do not admit it) is to compare status amongst peers or others in your sphere of influence or in adjacent areas.

    Who are you and your influences: some spheres of influence (image)

    In addition, metrics matter for those looking to land or obtain advertising sponsors for their sites, or perhaps to help gain exposure when looking for a new job or career move. Metrics also matter for gauging the effectiveness or return on investment of social media, which could range from how many followers you have to how far your brand's reach extends into other realms and venues.

    In the case of Twitter, for some the key metric is the number of followers (e.g. popularity) or those being followed, with other metrics being the number of posts or tweets along with retweets and list inclusions. For blogs and web sites, incoming links along with site activity, among other metrics, factor into various ranking sites. Web site activity can be measured in several ways, including total hits or visits, pages read, and unique visitors, among others.
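
    To illustrate how such raw numbers can be rolled up, here is a minimal, purely hypothetical sketch (not any particular site's actual algorithm) that combines a few of the metrics mentioned above into a single composite score. The weights and log scaling are illustrative assumptions only.

```python
import math

def composite_score(followers, retweets, posts, unique_visitors, incoming_links,
                    weights=(0.3, 0.25, 0.1, 0.2, 0.15)):
    """Toy composite 'influence' score from raw social and web metrics.

    Log scaling keeps one huge number (e.g. page hits) from drowning out
    the rest; the weights are arbitrary, illustrative assumptions.
    """
    metrics = (followers, retweets, posts, unique_visitors, incoming_links)
    return sum(w * math.log10(1 + m) for w, m in zip(weights, metrics))

# Compare two hypothetical accounts/sites
print(round(composite_score(5000, 200, 1200, 30000, 150), 2))
print(round(composite_score(800, 900, 300, 5000, 400), 2))
```

    Change the weights and the rankings change, which is part of the point: different sites emphasize different raw metrics, so the same person or site can rank very differently from one tool to the next.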

    Having been involved with social media from a blogging and Twitter perspective for a couple of years, not to mention being a former server and storage capacity planner, I find metrics interesting. In addition to the metrics themselves, what is also interesting is how they are used differently for various purposes, including gauging cause and effect or the return on social networking investment.

    Regardless of your motives or objectives with metrics, here is a quick synopsis of some tools and sites that I have come across that you may already be using, or if not, that you might be interested in.

    What are some metrics?

    If you are interested in your Twitter effectiveness, see your report card at Tweet Grade. Another Twitter site that provides a grade based on numerous factors is Twitter Grader, while Klout.com characterizes your activity on four different planes, similar to a Gartner Magic Quadrant. Over at the Customer Collective they have an example of a more thorough gauge of effectiveness, looking at several different metrics, some of which are covered here.

    Sample Metrics (image)

    Customer Collective Metrics and Rankings

    Similar to Technorati, Tekrati, or other directory and index sites, Wefollow is a popular venue for tracking Twitter tweeps based on various hashtags, for example IT or storage, among many others. TweetLevel provides a composite ranking determined by influence, popularity, engagement, and trust. Talkreviews.com provides various metrics on blogs and websites, including unique visitor traffic estimates, while Compete.com shows estimated site visitor traffic with the option to compare to others. If you are interested in seeing how your website or blog is performing in terms of effectiveness and reach, then in addition to Compete.com, check out talkreviews.com or Blog Grader, which looks at and reports on various blog metrics and information.

    The sites and tools mentioned are far from an exhaustive listing of sites or metrics for various purposes; rather, they are a sampling of what is available to meet different needs. For example, there are Alexa, Google, and Yahoo rankings, among many others.

    Wefollow as an example or discussion topic

    One of the things that I find interesting is the diversity in the metrics and rankings. For example, look at Wefollow's top 10 or 20 for a particular category, then use one or more of the other tools to see how the various rankings change.

    A month or so ago I was curious to see if some of the sites could be gamed beyond running up the number of posts, tweets, followers or followings, along with retweets, which some sites appear to be influenced by. As part of determining which metrics matter and which to ignore or keep in the back pocket for when needed, I looked at and experimented with Wefollow.

    For those who might have been aware of what I was doing, I went from barely being visible in, for example, the storage category to jumping into the top 5. Then, with some changes, I was able to disappear from the top 5 and show up elsewhere, and then, when all was said and done, return to the top rankings.

    Does this mean I put a lot of stock or value in Wefollow, or simply use it as a gauge and metric along with all of the others? The answer is that it is just that: another metric and tool that can be used for gauging effectiveness and reach, or if you prefer, status, or whatever your preferences and objectives are.

    How did I change my rankings on Wefollow? Simple: I experimented with various tags in different combinations, sometimes only one, sometimes many, while keeping them relevant, and then waited several days. I'm sure that if you are inclined and have plenty of time on your hands, you can figure out or find out how the actual algorithms work; however, for me right now, I have other projects to pursue.

    What is the best metric?

    That is going to depend on your objectives or what you are trying to accomplish.

    As with other measurements and metrics, those for social media provide different points of reference from how many followers to amount of influence.

    Depending on your objective, effectiveness may be gauged by the number of followers or those being followed, the number of posts, or the number of times you are quoted or referenced by others, including in lists.

    In some cases, rankings that compare you with others are based on those sites knowing about you, which may mean having to register so that you can be found.

    Bottom line: metrics matter; however, what they mean and how important they are will vary depending on your objectives and preferences, or on what you are trying to accomplish.

    One of the interesting things about social networking and media sites is that if you do not like a particular ranking, list, grade, or status, you can either work to change the factors that influence those scores, or come up with your own.

    What is your take on the metrics that matter, unless of course they do not matter to you?

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    March Metric Madness: Fun with Simple Math

    It's March, and besides being spring in North America, it also means tournament season, including the NCAA basketball tournament, among others, known as March Madness.

    Given the office pools and other forms of playing with numbers tied to the tournaments and real or virtual money, here is a quick timeout looking at some fun with math.

    The fun is in showing how simple math can be used to show relative growth for IT resources such as data storage. For example, say that you have 10Tbytes of storage or data and that it is growing at only 10 percent per year; simple math shows that in five years it becomes 14.6Tbytes.

    Now let's assume the growth rate is 50 percent per year: over the course of five years, instead of 10Tbytes you now have 50.6Tbytes. If you have 100Tbytes today, a 50 percent growth rate would yield 506.3Tbytes, or about half a petabyte, in 5 years. If by chance you have say 1Pbyte or 1,000Tbytes today, at 25% year over year growth you would have 2.44Pbytes in 5 years.
    Figure 1: Fun with simple math and projected growth rates (basic storage forecast)

    Granted, this is simple math showing basic examples; however, the point is that depending on your growth rate and the amount of current data or storage, you might be surprised at the forecast or projected needs in only five years.
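
    For those who want to try the arithmetic themselves, here is a minimal sketch of the compound-growth math behind the examples above. The quoted figures (14.6, 50.6, 506.3, and 2.44Pbytes) line up with treating the current year as year one, that is, four compounding periods over the five-year window; that interpretation, and the small Python helper below, are illustrative assumptions rather than a formal forecasting model.

```python
def projected_capacity(current_tb, annual_growth, years=5):
    """Simple compound-growth projection.

    Treats the current year as year one, so a five-year outlook
    applies (years - 1) compounding periods, which reproduces the
    figures used in the examples above.
    """
    return current_tb * (1 + annual_growth) ** (years - 1)

print(projected_capacity(10, 0.10))    # ~14.6 Tbytes
print(projected_capacity(10, 0.50))    # ~50.6 Tbytes
print(projected_capacity(100, 0.50))   # ~506.3 Tbytes
print(projected_capacity(1000, 0.25))  # ~2441 Tbytes (~2.44 Pbytes)
```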

    In a nutshell, these are examples of very basic, primitive capacity forecasts that would vary with other factors. For example, if the data is 10Tbytes and your policy calls for 25 percent free space, you would require even more storage than the base amount. Go with a different RAID level, add some extra space for snapshots, disk-to-disk backups, and replication, not to mention test and development, and those numbers go up even higher.

    Sure, those amounts can be offset with thin provisioning, dedupe, archiving, compression, and other forms of data footprint reduction; however, the point here is to realize how simple math can portray a very basic forecast and picture of growth.
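
    Continuing the back-of-the-envelope theme, those overhead factors can be layered onto the same projection. The percentages below (free space, RAID overhead, extra protection copies) are purely hypothetical placeholders for illustration, not recommendations.

```python
def raw_capacity_needed(usable_tb, free_space_pct=0.25,
                        raid_overhead_pct=0.20, protection_copies=1.0):
    """Gross up a usable-capacity forecast for free space, RAID overhead,
    and extra protection copies (snapshots, disk-to-disk backup, replication).

    All percentages here are illustrative assumptions only.
    """
    with_free_space = usable_tb / (1 - free_space_pct)     # keep 25% free
    with_raid = with_free_space / (1 - raid_overhead_pct)  # e.g. parity overhead
    return with_raid * (1 + protection_copies)             # one extra full copy

# 50.6 Tbytes of projected data could need well over 100 Tbytes of raw capacity
print(round(raw_capacity_needed(50.6), 1))
```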

    Read more about performance and capacity in Chapter 10, Performance and Capacity Planning for Storage Networks, of Resilient Storage Networks (Elsevier), as well as at www.cmg.org (Computer Measurement Group).

    And that is all I have to say about this for now, enjoy March madness and fun with numbers.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Recent tips, videos, articles and more update V2010.1

    Realizing that some prefer blogs to web sites to Twitter to other venues, here are some recent links to articles, tips, videos, webcasts, and other content that have appeared in different venues since August 2009.

  • i365 Guest Interview: Experts Corner: Q&A with Greg Schulz December 2009
  • SearchCIO Midmarket: Remote-location disaster recovery risks and solutions December 2009
  • BizTech Magazine: High Availability: A Delicate Balancing Act November 2009
  • ESJ: What Comprises a Green, Efficient and Effective Virtual Data Center? November 2009
  • SearchSMBStorage: Determining what server to use for SMB November 2009
  • SearchStorage: Performance metrics: Evaluating your data storage efficiency October 2009
  • SearchStorage: Optimizing capacity and performance to reduce data footprint October 2009
  • SearchSMBStorage: How often should I conduct a disaster recovery (DR) test? October 2009
  • SearchStorage: Addressing storage performance bottlenecks in storage September 2009
  • SearchStorage AU: Is tape the right backup medium for smaller businesses? August 2009
  • ITworld: The new green data center: From energy avoidance to energy efficiency August 2009
  • Video and podcasts include:
    December 2009 Video: Green Storage: Metrics and measurement for management insight
    Discussion between Greg Schulz and Mark Lewis of TechTarget about the importance of metrics and measurement to gauge productivity and efficiency for Green IT and enabling virtual information factories. Click here to watch the video.

    December 2009 Podcast: iSCSI SANs can be a good fit for SMB storage
    Discussion between Greg Schulz and Andrew Burton of TechTarget about iSCSI and other related technologies for SMB storage. Click here to listen to the podcast.

    December 2009 Podcast: RAID Data Protection Discussion
    Discussion between Greg Schulz and Andrew Burton of TechTarget about RAID data protection, techniques, and technologies. Click here to listen to the podcast.

    December 2009 Podcast: Green IT, Efficiency and Productivity Discussion
    Discussion between Greg Schulz and Jon Flower of Adaptec about Green IT, energy efficiency, intelligent power management (IPM), also known as MAID 2.0, and other forms of optimization techniques, including SSD. Click here to listen to the podcast sponsored by Adaptec.

    November 2009 Podcast: Reducing your data footprint impact
    Even though many enterprise data storage environments are coping with tightened budgets and reduced spending, overall net storage capacity is increasing. In this interview, Greg Schulz, founder and senior analyst at StorageIO Group, discusses how storage managers can reduce their data footprint. Schulz touches on the importance of managing your data footprint on both online and offline storage, as well as the various tools for doing so, including data archiving, thin provisioning and data deduplication. Click here to listen to the podcast.

    October 2009 Podcast: Enterprise data storage technologies rise from the dead
    In this interview, Greg Schulz, founder and senior analyst of the Storage I/O group, classifies popular technologies such as solid-state drives (SSDs), RAID and Fibre Channel (FC) as “zombie” technologies. Why? These are already set to become part of standard storage infrastructures, says Schulz, and are too old to be considered fresh. But while some consider these technologies to be stale, users should expect to see them in their everyday lives. Click here to listen to the podcast.

    Check out the Tips, Tools and White Papers, and News pages for additional commentary, coverage and related content or events.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    How to win approval for upgrades: Link them to business benefits

    Drew Robb has another good article over at Processor.com about various tips and strategies on how to gain approval for hardware (or software) purchases, with some comments by yours truly.

    My tips and advice quoted in the story include linking technology resources to business needs and impact, which may be common sense, yet it is still a time-tested, effective technique.

    Instead of speaking tech talk such as performance, capacity, availability, IOPS, bandwidth, GHz, frames or packets per second, VM-to-PM ratios, or dedupe ratios, map them to business speak, that is, things that finance, accountants, MBAs, or other management personnel understand.

    For example, how many transactions at a given response time can be supported by a given type of server, storage or networking device.

    Or, put a different way, with a given device, how much work can be done and what is the associated monetary or business benefit.
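
    As a back-of-the-envelope illustration of that translation, here is a minimal sketch; the IOPS-per-transaction figure, device cost, and revenue per transaction are made-up placeholder numbers, not benchmarks or vendor data.

```python
def business_view(device_iops, iops_per_txn, device_cost, revenue_per_txn):
    """Translate a device's raw IOPS capability into business terms.

    All inputs are hypothetical placeholders for illustration.
    """
    txns_per_sec = device_iops / iops_per_txn
    cost_per_txn_per_sec = device_cost / txns_per_sec
    hourly_revenue_supported = txns_per_sec * 3600 * revenue_per_txn
    return txns_per_sec, cost_per_txn_per_sec, hourly_revenue_supported

# e.g. a 20,000 IOPS device, 10 I/Os per transaction, $50,000 cost, $0.25 revenue per transaction
tps, cost, revenue = business_view(20_000, 10, 50_000, 0.25)
print(f"{tps:.0f} transactions/sec, ${cost:,.0f} per txn/sec of capability, ${revenue:,.0f}/hour supported")
```

    Framed that way, the conversation shifts from feeds and speeds to how much business work a device supports and what that work is worth.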

    Likewise, if you do not have a capacity plan for servers, storage, I/O and networking, along with software and facilities, covering performance, availability, capacity, and energy demands, now is the time to put one in place.

    More on capacity and performance planning later; however, for now, if you want to learn more, check Chapter 10 (Performance and Capacity Planning) in my book Resilient Storage Networks: Designing Flexible and Scalable Data Infrastructure (Elsevier).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved