Spring (May) 2012 StorageIO newsletter


Welcome to the Spring (May) 2012 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall (December) 2011 edition.

You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the Spring (May) 2012 edition as HTML or PDF, or go to the newsletter page to view previous editions.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

What is the best kind of IO? The one you do not have to do


Updated 2/10/2018

What is the best kind of IO? If no IO (input/output) operation is the best IO, then the second best IO is the one that can be done as close as possible to the application and processor, with the best locality of reference. The third best IO is the one that can be done in less time, or at the lowest cost or impact to the requesting application, which means moving further down the memory and storage stack (figure 1).

Storage and IO or I/O locality of reference and storage hierarchy
Figure 1: Memory and storage hierarchy

The problem with IOs is that they are the basic operations for getting data into and out of a computer or processor, so they are required; however, they also have an impact on performance, response or wait time (latency). IOs require CPU or processor time and memory to set up and then process the results, as well as IO and networking resources to move data to its destination or retrieve it from where it is stored. While IOs cannot be eliminated, their impact can be greatly reduced or optimized by doing fewer of them via caching and grouped reads or writes (pre-fetch, write-behind), among other techniques and technologies.
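As a rough illustration of the grouping idea (a hypothetical sketch, not anything from a particular product), here is a simple write-behind buffer that coalesces many small writes into fewer, larger backend IOs:

```python
# Hypothetical sketch: coalescing many small writes into fewer, larger IOs
# (a simple write-behind buffer), one of the techniques mentioned above.

class WriteBehindBuffer:
    """Groups small writes and flushes them as one larger IO."""

    def __init__(self, backend_write, batch_size=8):
        self.backend_write = backend_write  # the "real" (slow) IO operation
        self.batch_size = batch_size
        self.pending = []

    def write(self, record):
        self.pending.append(record)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.backend_write(self.pending)  # one backend IO for many records
            self.pending = []

io_count = 0
def slow_backend(batch):
    global io_count
    io_count += 1  # each call represents one physical IO

buf = WriteBehindBuffer(slow_backend, batch_size=8)
for i in range(32):
    buf.write(i)
buf.flush()

print(io_count)  # 32 records written with only 4 backend IOs
```

The trade-off is exactly the errand analogy that follows: fewer, larger trips are more efficient overall, but an individual record may wait longer before it lands on the backend.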

Think of it this way: instead of going on multiple errands, sometimes you can group several destinations together, making for a shorter, more efficient trip; however, that optimization may also take longer. Hence, sometimes it makes sense to go on a couple of quick, short, low latency trips vs. one single larger trip that takes half a day but accomplishes many things. Of course, how far you have to go on those trips (e.g. locality) makes a difference in how many you can do in a given amount of time.

What is locality of reference?

Locality of reference refers to how close (e.g., in location) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, then level 1 (L1), level 2 (L2) or level 3 (L3) on-board cache, followed by dynamic random access memory (DRAM). Next would come memory, also known as storage, on PCIe cards such as nand flash solid state devices (SSD), or storage accessible via an adapter on a direct attached storage (DAS), SAN or NAS device. In the case of a PCIe nand flash SSD card, even though the nand flash is physically closer to the processor, there is still the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data, along with metadata or control information, to further optimize and improve locality of reference. In other words, they help with cache hits, cache use and cache effectiveness vs. simply boosting cache utilization.
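To make the distinction between cache effectiveness and cache utilization concrete, here is a minimal illustrative sketch (my own, not from any product) of a small LRU read cache that tracks its hit rate; a workload with good locality of reference turns most reads into cache hits, avoiding trips further down the stack:

```python
from collections import OrderedDict

class LRUReadCache:
    """Tiny LRU cache that tracks hits vs misses, so effectiveness
    (hit rate) can be measured separately from utilization (how full it is)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, key, backend):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)       # mark as most recently used
            return self.data[key]
        self.misses += 1
        value = backend(key)                 # "slow" IO further down the stack
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict least recently used
        return value

cache = LRUReadCache(capacity=4)
backend = lambda k: k * 2

# A workload with good locality of reference: repeated reads of a few keys
for key in [1, 2, 1, 2, 3, 1, 2, 1]:
    cache.read(key, backend)

hit_rate = cache.hits / (cache.hits + cache.misses)
print(hit_rate)  # 5 hits out of 8 reads -> 0.625
```

Note the cache could be 100 percent utilized (full) yet still be ineffective if the workload has poor locality; hit rate, not fullness, is what reduces IOs.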

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What can you do to cut the impact of IO?

  • Establish baseline performance and availability metrics for comparison
  • Realize that IOs are a fact of virtual, physical and cloud IT life
  • Understand what a bad IO is, along with its impact
  • Identify why an IO is bad, expensive or causing an impact
  • Find and fix the problem, whether with software, application or database changes
  • Throw more software caching tools, hypervisors or hardware at the problem
  • Hardware includes faster processors with more DRAM and fast internal buses
  • Leverage local PCIe flash SSD cards for caching or as targets
  • Utilize storage systems or appliances that have intelligent caching and storage optimization capabilities (performance, availability, capacity)
  • Compare changes and improvements to the baseline and quantify the improvement
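The first and last items in the list above, establishing a baseline and then quantifying improvement against it, boil down to simple arithmetic. Here is an illustrative sketch (the metric names and numbers are invented for the example):

```python
# Hypothetical before/after comparison of IO metrics against a baseline.
baseline = {"iops": 5000, "avg_latency_ms": 8.0}
after_tuning = {"iops": 9000, "avg_latency_ms": 3.2}

iops_gain_pct = 100.0 * (after_tuning["iops"] - baseline["iops"]) / baseline["iops"]
latency_cut_pct = 100.0 * (baseline["avg_latency_ms"] - after_tuning["avg_latency_ms"]) / baseline["avg_latency_ms"]

print(f"IOPS improved {iops_gain_pct:.0f}%, latency reduced {latency_cut_pct:.0f}%")
# IOPS improved 80%, latency reduced 60%
```

Without the baseline numbers captured first, there is nothing to compare the tuned results against, which is why it is the first step.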

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cloud and Virtual Data Storage Networking book released

Ok, it’s now official: following its debut at the VMworld 2011 book store last week in Las Vegas, my new book Cloud and Virtual Data Storage Networking (CRC Press) is now formally released, with general availability announced today along with companion material located at https://storageioblog.com/book3, including the Cloud and Virtual Data Storage Networking LinkedIn group page launched a few months ago. Cloud and Virtual Data Storage Networking (CVDSN), a 370-page hard cover print, is my third solo book, following The Green and Virtual Data Center (CRC Press 2009) and Resilient Storage Networks (Elsevier 2004).

Cloud and Virtual Data Storage Networking Book by Greg Schulz
The CVDSN book was on display at the VMworld 2011 book store last week along with a new book by Duncan Epping (aka @DuncanYB) and Frank Denneman (aka @frankdenneman) titled VMware vSphere 5 Clustering Technical Deepdive. You can get your copy of Duncan and Frank's new book on Amazon here.

Greg Schulz during book signing at VMworld 2011
Here is a photo of me on the left visiting a VMworld 2011 attendee in the VMworld book store.

 

What's inside the book: theme and topics covered

When it comes to clouds, virtualization, converged and dynamic infrastructures: don't be scared; however, do look before you leap and be prepared, including doing your homework.

What this means is that you should do your homework, prepare, learn, and get involved with proof of concepts (POCs) and training to build the momentum and success to continue an ongoing IT journey. Identify where clouds, virtualization and data storage networking technologies and techniques complement and enable your journey to efficient, effective and productive optimized IT services delivery.

 

There is no such thing as a data or information recession: Do more with what you have

A common challenge in many organizations is exploding data growth along with the associated management tasks and constraints, including budgets, staffing, time, physical facilities, floor space, and power and cooling. IT clouds and dynamic infrastructure environments enable flexible, efficient and optimized, cost-effective and productive services delivery. The amount of data being generated, processed, and stored continues to grow, a trend that does not appear to be changing in the future. Even during the recent economic crisis, there has been no slowdown or information recession. Instead, the need to process, move, and store data has only increased; in fact, both people and data are living longer. CVDSN presents options, technologies, best practices and strategies for IT organizations looking to do more with what they have while supporting growth along with new services, without compromising on cost or QoS delivery (see figure below).

Driving Return on Innovation, the new ROI: doing more, reducing costs while boosting productivity

 

Expanding focus from efficiency and optimization to effectiveness and productivity

A primary tenet of a cloud and virtualized environment is to support growing demand in a cost-effective manner with increased agility, without compromising QoS. By removing complexity and enabling agility, information services can be delivered in a timely manner to meet changing business needs.

 

There are many types of information services delivery model options

Various types of information services delivery modes should be combined to meet various needs and requirements. These complementary service delivery options and descriptive terms include cloud, virtual and data storage network enabled environments: dynamic infrastructure, public, private and hybrid cloud, abstracted, multi-tenant, capacity on demand, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), among others.

Convergence: combining different technology domains and skill sets

Components of a cloud and virtual environment include desktops, servers, storage, networking, hardware, software, and services along with APIs and software stacks. These include virtual and physical desktops; data, voice and storage networks; LANs, SANs, MANs and WANs; faster blade and rack servers with more memory; SSD and high-capacity storage; and associated virtualization tools and management software. True convergence combines technology with people, processes and best practices, aligned to make the most of those resources for cost-effective services delivery.

 

Best people, processes, practices and products (the four Ps)

Bringing all the various components together are the four Ps: people (skill sets and experience), processes, practices and products. This means leveraging and enhancing people's skill sets and experience; processes and procedures to optimize workflow for streamlined service orchestration; practices and policies to more effectively reduce waste without causing new bottlenecks; and products such as racks, stacks, hardware, software, and managed or cloud services.

 

Service categories and catalogs, templates SLO and SLA alignment

Establishing service categories aligned to known service levels and costs enables resources to be aligned to applicable SLO and SLA requirements. Leveraging service templates and defined policies can enable automation and rapid provisioning of resources including self-service requests.
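As a hypothetical illustration of aligning service categories to SLOs (the tier names, targets and costs below are invented for the example, not from the book), a simple service catalog might let a self-service request be matched to the cheapest tier that meets its requirements:

```python
# Hypothetical service catalog: categories aligned to SLO targets and cost.
catalog = {
    "gold":   {"availability_pct": 99.99, "max_latency_ms": 5,   "cost_per_gb_month": 0.50},
    "silver": {"availability_pct": 99.9,  "max_latency_ms": 20,  "cost_per_gb_month": 0.20},
    "bronze": {"availability_pct": 99.0,  "max_latency_ms": 100, "cost_per_gb_month": 0.05},
}

def pick_tier(required_availability, max_latency_ms):
    """Self-service request: return the cheapest tier meeting the SLO, or None."""
    candidates = [
        (spec["cost_per_gb_month"], name)
        for name, spec in catalog.items()
        if spec["availability_pct"] >= required_availability
        and spec["max_latency_ms"] <= max_latency_ms
    ]
    return min(candidates)[1] if candidates else None

print(pick_tier(99.5, 50))  # silver meets the SLO at the lowest cost
```

Encoding the catalog as data like this is what makes automation and rapid provisioning possible: the policy is applied mechanically rather than negotiated per request.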

 

Navigating to effective IT services delivery: Metrics, measurements and E2E management

You cannot effectively manage what you do not know about; likewise, without situational awareness or navigation tools, you are flying blind. E2E (End to End) tools can provide monitoring and usage metrics for reporting and accounting, including enabling comparison with other environments. Metrics include customer service satisfaction, SLOs and SLAs, QoS, performance, availability and the cost of services delivered.

 

The importance of data protection for virtual, cloud and physical environments

Clouds and virtualization are important tools and technologies for protecting existing consolidated or converged as well as traditional environments. Likewise, virtual and cloud environments or data placed there also need to be protected. Now is the time to rethink and modernize your data protection strategy to be more effective, protecting, preserving and serving more data for longer periods of time with less complexity and cost.

 

Packing smart and effectively for your journey: Data footprint reduction (DFR)

Reducing your data footprint impact leveraging data footprint reduction (DFR) techniques, technologies and best practices is important for enabling an optimized, efficient and effective IT services delivery environment. Reducing your data footprint is enabled with clouds and virtualization providing a means and mechanism for archiving inactive data and for transparently moving it. On the other hand, moving to a cloud and virtualized environment to do more with what you have is enhanced by reducing the impact of your data footprint. The ABCDs of data footprint reduction include Archiving, Backup modernization, Compression and consolidation, Data management and dedupe along with Storage tiering and thin provisioning among other techniques.
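As a rough sketch of two of the DFR techniques named above, compression and dedupe (illustrative only, not an excerpt from the book), content hashing can identify duplicate blocks so only unique, compressed data gets stored:

```python
import hashlib
import zlib

def dedupe_and_compress(blocks):
    """Store each unique block once (dedupe), compressed; return the store
    plus a recipe of hashes for reconstructing the original stream."""
    store = {}     # hash -> compressed unique block
    recipe = []    # ordered hashes to rebuild the data
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(block)
        recipe.append(digest)
    return store, recipe

# Eight 4 KB blocks, but only two distinct patterns (e.g. repeated backup data)
blocks = [b"A" * 4096, b"B" * 4096] * 4
store, recipe = dedupe_and_compress(blocks)

raw_bytes = sum(len(b) for b in blocks)
stored_bytes = sum(len(v) for v in store.values())
print(len(store), raw_bytes, stored_bytes)  # 2 unique blocks; far fewer bytes stored
```

Real DFR ratios depend heavily on the data: highly repetitive backup streams reduce dramatically, while already-compressed or encrypted data may not reduce at all.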

Cloud and Virtual Data Storage Networking book by Greg Schulz

How the book is laid out:

  • Table of content (TOC)
  • How the book is organized and who should read it
  • Preface
  • Section I: Why the need for cloud, virtualization and data storage networks
  • Chapter 1: Industry trends and perspectives: From issues and challenges to opportunities
  • Chapter 2: Cloud, virtualization and data storage networking fundamentals
  • Section II: Managing data and resources: Protect, preserve, secure and serve
  • Chapter 3: Infrastructure Resource Management (IRM)
  • Chapter 4: Data and storage networking security
  • Chapter 5: Data protection (Backup/Restore, BC and DR)
  • Chapter 6: Metrics and measurement for situational awareness
  • Section III: Technology, tools and solution options
  • Chapter 7: Data footprint reduction: Enabling cost-effective data demand growth
  • Chapter 8: Enabling data footprint reduction: Storage capacity optimization
  • Chapter 9: Storage services and systems
  • Chapter 10: Server virtualization
  • Chapter 11: Connectivity: Networking with your servers and storage
  • Chapter 12: Cloud and solution packages
  • Chapter 13: Management and tools
  • Section IV: Putting IT all together
  • Chapter 14: Applying what you have learned
  • Chapter 15: Wrap-up, what’s next and book summary
  • Appendices:
  • Where to Learn More
  • Index and Glossary

Here is the release that went out via Business Wire (aka Bizwire) earlier today.

 

Industry Veteran Greg Schulz of StorageIO Reveals Latest IT Strategies in “Cloud and Virtual Data Storage Networking” Book
StorageIO Founder Launches the Definitive Book for Enabling Cloud, Virtualized, Dynamic, and Converged Infrastructures

Stillwater, Minnesota – September 7, 2011  – The Server and StorageIO Group (www.storageio.com), a leading independent IT industry advisory and consultancy firm, in conjunction with  publisher CRC Press, a Taylor and Francis imprint, today announced the release of “Cloud and Virtual Data Storage Networking,” a new book by Greg Schulz, noted author and StorageIO founder. The book examines strategies for the design, implementation, and management of hardware, software, and services technologies that enable the most advanced, dynamic, and flexible cloud and virtual environments.

Cloud and Virtual Data Storage Networking

The book supplies real-world perspectives, tips, recommendations, figures, and diagrams on creating an efficient, flexible and optimized IT service delivery infrastructure to support demand without compromising quality of service (QoS) in a cost-effective manner. “Cloud and Virtual Data Storage Networking” looks at converging IT resources and management technologies to facilitate efficient and effective delivery of information services, including enabling information factories. Schulz guides readers of all experience levels through the various technologies and techniques available to them for enabling efficient information services.

Topics covered in the book include:

  • Information services model options and best practices
  • Metrics for efficient E2E IT management and measurement
  • Server, storage, I/O networking, and data center virtualization
  • Converged and cloud storage services (IaaS, PaaS, SaaS)
  • Public, private, and hybrid cloud and managed services
  • Data protection for virtual, cloud, and physical environments
  • Data footprint reduction (archive, backup modernization, compression, dedupe)
  • High availability, business continuance (BC), and disaster recovery (DR)
  • Performance, availability and capacity optimization

This book explains when, where, with what, and how to leverage cloud, virtual, and data storage networking as part of an IT infrastructure today and in the future. “Cloud and Virtual Data Storage Networking” comprehensively covers IT data storage networking infrastructures, including public, private and hybrid cloud, managed services, virtualization, and traditional IT environments.

“With all the chatter in the market about cloud storage and how it can solve all your problems, the industry needed a clear breakdown of the facts and how to use cloud storage effectively. Greg’s latest book does exactly that,” said Greg Brunton of EDS, an HP company.

Click here to listen and watch Schulz discuss his new book in this Video about Cloud and Virtual Data Storage Networking book by Greg Schulz video.

About the Book

Cloud and Virtual Data Storage Networking has 370 pages, with more than 100 figures and tables, 15 chapters plus appendices, as well as a glossary. CRC Press catalog number K12375, ISBN-10: 1439851735, ISBN-13: 9781439851739, publication September 2011. The hard cover book can be purchased now at global venues including Amazon, Barnes and Noble, Digital Guru and CRCPress.com. Companion material is located at https://storageioblog.com/book3 including images, additional information, supporting site links at CRC Press, LinkedIn Cloud and Virtual Data Storage Networking group, and other books by the author. Direct book editorial review inquiries to John Wyzalek of CRC Press at john.wyzalek@taylorfrancis.com (twitter @jwyzalek) or +1 (917) 351-7149. For bulk and special orders contact Chris Manion of CRC Press at chris.manion@taylorandfrancis.com or +1 (561) 998-2508. For custom, derivative works and excerpts, contact StorageIO at info@storageio.com.

About the Author

Greg Schulz is the founder of the independent IT industry advisory firm StorageIO. Before forming StorageIO, Schulz worked for several vendors in systems engineering, sales, and marketing technologist roles. In addition to having been an analyst, vendor and VAR, Schulz also gained real-world hands-on experience working in IT organizations across different industry sectors. His IT customer experience spans systems development, systems administration, disaster recovery consulting, and capacity planning across different technology domains, including servers, storage, I/O networking hardware, software and services. Today, in addition to his analyst and research duties, Schulz is a prolific writer, blogger, and sought-after speaker, sharing his expertise with worldwide technology manufacturers and resellers, IT users, and members of the media. With an insightful and thought-provoking style, Schulz is also author of the books “The Green and Virtual Data Center” (CRC Press, 2009), which is on the Intel developers' recommended reading list, and the SNIA-endorsed book “Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures” (Elsevier, 2004). Schulz is available for interviews and commentary, briefings, speaking engagements at conferences and private events, webinars, video and podcast along with custom advisory consultation sessions. Learn more at https://storageio.com.

End of press release.

Wrap up

I want to express thanks to all of those involved with the project that spanned over the past year.

Stay tuned for more news and updates pertaining to Cloud and Virtual Data Storage Networking along with related material, including upcoming events as well as chapter excerpts. Speaking of events, here is information on an upcoming workshop seminar that I will be involved with for IT storage and networking professionals, to be held October 4th and 5th in the Netherlands.

You can get your copy now at global venues including Amazon, Barnes and Noble, Digital Guru and CRCPress.com.

Ok, nuff said, for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

StorageIO going Dutch again: October 2011 Seminar for storage professionals

Greg Schulz of StorageIO, in conjunction with our Dutch partner Brouwer Storage Consultancy, will be presenting a two day workshop seminar for IT storage, virtualization, and networking professionals Monday 3rd and Tuesday 4th of October 2011 at Ampt van Nijkerk, Netherlands.


This two day interactive education seminar for storage professionals will focus on current data and storage networking trends, technology and business challenges along with available technologies and solutions. During the seminar, learn what technologies and management techniques are available, how different vendors' solutions compare, and what to use when and where. This seminar digs into the various IT tools, techniques, technologies and best practices for enabling an efficient, effective, flexible, scalable and resilient data infrastructure.

The format of this two day seminar will be a mix of presentation and interactive discussion, allowing attendees plenty of time to discuss among themselves and with the seminar presenters. Attendees will gain insight into how to compare and contrast various technologies and solutions, in addition to identifying and aligning those solutions to their specific issues, challenges and requirements.

Major themes that will be discussed include:

  • Who is doing what with various storage solutions and tools
  • Is RAID still relevant for today and tomorrow
  • Are hard disk drives and tape finally dead at the hands of SSD and clouds
  • What am I routinely hearing, seeing or being asked to comment on
  • Enabling storage optimization, efficiency and effectiveness (performance and capacity)
  • Opportunities for leveraging various technologies, techniques and trends
  • Supporting virtual servers including re-architecting data protection
  • How to modernize data protection (backup/restore, BC, DR, replication, snapshots)
  • Data footprint reduction (DFR) including archive, compression and dedupe
  • Clarifying cloud confusion, don’t be scared, however look before you leap
  • Big data, big bandwidth and virtual desktop infrastructures (VDI)

In addition, this two day seminar will look at new and improved technologies and techniques, and who is doing what, along with discussions around industry and vendor activity including mergers and acquisitions. Besides seminar handout materials, attendees will also receive a copy of Cloud and Virtual Data Storage Networking (CRC Press) by Greg Schulz, which looks at enabling efficient, optimized and effective information services delivery across cloud, virtual and traditional environments.

Cloud and Virtual Data Storage Networking Book

Buzzwords and topic themes to be discussed among others include E2E, FCoE and DCB, CNAs, SAS, I/O virtualization, server and storage virtualization, public and private cloud, Dynamic Infrastructures, VDI, RAID and advanced data protection options, SSD, flash, SAN, DAS and NAS, object storage, big data and big bandwidth, backup, BC, DR, application optimized or aware storage, open storage, scale out storage solutions, federated management, metrics and measurements, performance and capacity, data movement and migration, storage tiering, data protection modernization, SRA and SRM, data footprint reduction (archive, compress, dedupe), unified and multi-protocol storage, solution bundle and stacks.

For more information or to register contact Brouwer Storage Consultancy

Brouwer Storage Consultancy
Olevoortseweg 43
3861 MH Nijkerk
The Netherlands
Telephone: +31-33-246-6825
Cell: +31-652-601-309
Fax: +31-33-245-8956
Email: info@brouwerconsultancy.com
Web: www.brouwerconsultancy.com


Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier)
twitter @storageio


Getting SASsy, the other shared storage option for disk and SSD systems

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to getting SASsy, the other shared server to storage interconnect for disk and SSD systems. Serial Attached SCSI (SAS) is best known as an interface for connecting hard disk drives (HDD) to servers and storage systems; however, it is also widely used for attaching storage systems to physical as well as virtual servers. An important storage requirement for virtual machine (VM) environments with more than one physical machine (PM) server is shared storage. SAS has become a viable interconnect along with other Storage Area Network (SAN) interfaces, including Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and iSCSI, for block access.

Read more here.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Is FCoE Struggling to Gain Traction, or on a normal adoption course?

Here is an article by Drew Robb over at Enterprise Storage Forum about Fibre Channel over Ethernet (FCoE) and its state of adoption. Drew's article includes comments and perspectives from me on where FCoE is going and why it is on a long road rather than a sprint for a short temporal technology play (e.g. not a quick passing fad or bandwagon trend).

If you measure FCoE adoption in months, sure, it's been slow to gain adoption and deployment, similar to how Ethernet, Fibre Channel (FC) and even iSCSI took time to evolve. Part of the time involved is for developing the standards and implementing the technology, as well as expanding the capabilities of the new tools. Another part of the time required for technologies that are targeted to be around for a decade or more includes ecosystem maturity and education, not to mention customers becoming comfortable with, and having the budget to buy, the new items.

I have previously said that FCoE was in the trough of disillusionment; depending on your view, it could be entering, exiting or there to stay. Not surprisingly, some cheerleaders thought that saying FCoE was in the trough of disillusionment was being cynical, while some cynics were cheerleading.

My point around FCoE is that any technology or paradigm going through a hype cycle that will actually have long term legs, or be around for years if not decades, goes through a post initial hype disillusionment phase before reappearing. Technologies or trends that go through the trough of disillusionment and will eventually reappear sometimes go to Some Day Isle for rest and relaxation (R and R). Some Day Isle, for those not familiar with it, is a fictional place that some day you will go to, a wishful happy place so to speak, that is perfect for hyperbole R and R. After some R and R, these trends, technologies or techniques often reappear well rested and ready for the next wave of buzz, fud, hype and activity.

Certainly there have been and will continue to be more battles or matches tied to early deployments along with plenty of hype or FUD. After all, if FCoE were to simply pack up and go away like some cynics or naysayers suggest, what will they have to talk, blog, write or speak about? Similarly if FCoE magically goes mainstream tomorrow, the cheerleaders will have to find a new bandwagon or Shiny New Toy (SNT) to rally around.

Also, as I have said in the past, it's not if, rather when FCoE will be deployed in your or your customers' environments, along with how and using what tools or technologies. Another question to pose around FCoE as a converged technology is: will you use it in a true converged manner, meaning adapting how server, storage and networking resources are managed, including best practices? Or will you use FCoE in a hybrid SAN or LAN mode, using traditional SAN and LAN management practices and separate teams, perhaps even battling over who owns the tools or technology?

Fwiw, in case you did not pick up on it from my previous posts, tips, articles and coverage in books, I think that FCoE has a very bright future, as do NAS and iSCSI along with shared SAS, as complementary technologies when used for the applicable scenario.

What is your take, Is FCoE struggling to gain traction?

Is FCoE on a normal technology evolution path and timeline?

Is it too early to tell what the future holds for FCoE?

Is FCoE too little too late, and if so, why?

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio


NetApp buying LSI's Engenio Storage Business Unit


This has been a busy week: on Monday, Western Digital (WD) announced that they were buying the disk drive business from Hitachi Ltd. (e.g. HGST) for about $4.3 billion USD. The deal includes about $3.5B in cash and 25 million WD common shares (about $750M USD), which will give Hitachi Ltd. about ten (10) percent ownership of WD, along with adding two Hitachi persons to the WD board of directors. WD now moves into the number one hard disk drive (HDD) spot above Seagate (note Hitachi is not selling HDS), in addition to gaining a competitive position in both the enterprise HDD as well as emerging SSD markets.

Today NetApp announced that they have agreed to purchase portions of the LSI storage business known as Engenio for $480M USD.

The business and technology that LSI is selling to NetApp (aka Engenio) is the external storage system business that accounted for about $705M of their approximately $900M+ storage business in 2010. This piece of the business represents external (outside of the server) shared RAID storage systems that support Serial Attached SCSI (SAS), iSCSI, Fibre Channel (FC) and emerging FCoE (Fibre Channel over Ethernet), with SSD, SAS and FC high performance HDDs as well as high capacity HDDs. NetApp has block storage; however, their strong suit (sorry NetApp guys) is file, while Engenio's strong suit is block, which attaches to gateways from NetApp as well as others, in addition to servers for scale out NAS and cloud.

What NetApp is getting from LSI is the business that sells storage systems or their components to OEMs including Dell, IBM (here and here), Oracle, SGI and TeraData (a former NCR spin off) among others.

What LSI is retaining are their custom storage silicon, ICs, PCI RAID adapter and host bus adapter (HBA) cards including MegaRAID and 3ware, along with SAS chips, SAS switches, a PCIe SSD card and the Onstor NAS product they acquired about a year ago. Other parts of the LSI business, which make chips for storage, networking and communications vendors, are also not affected by this deal.

In other words, the sign in front of the Wichita LSI facility that used to say NCR will now probably include a NetApp logo once the deal closes.

For those not familiar, Tom Georgens, current CEO of NetApp, is very familiar with Engenio and LSI as he used to work there (after leaving a career at EMC). In fact Mr. Georgens was part of the most recent attempt to spin the external storage business out of LSI back in the mid 2000s when it received the Engenio name and branding. In addition to Tom Georgens, Vic Mahadevan, the current NetApp Chief Strategy Officer, recently worked at LSI and before that at BMC, Compaq and Maxxan among others.

What do I mean by the most recent attempt to spin the storage business out of LSI? Simple: the Engenio storage business traces its lineage back to NCR and what became known as Symbios Logic, which LSI acquired as part of other acquisitions.

Going back to the late 90s, there was word on the street that the then LSI management was not sure what to do with the storage business, as their core business was and still is making high volume chips and related technologies. Current LSI CEO Abhi Talwalkar is a chip guy (nothing wrong with that) who honed his skills at Intel. Thus it should not be a surprise that there is a focus on the LSI core business model of making their own as well as producing silicon (not the implant stuff) for IT and consumer electronics (read their annual report).

As part of the deal, LSI has already indicated that they will use all or some of the cash to buy back their stock. However I also wonder if this does not open the door for Abhi and his team to do some other acquisitions more synergistic with their core business.

What does NetApp get:

  • Expanded OEM and channel distribution capabilities
  • Block based products to coexist with their NAS gateways
  • Business with an established revenue base
  • Footprint into new or different markets
  • Opportunity to sell different product set to existing customers

NetApp gets an OEM channel distribution model to complement what they already have (mainly IBM) in addition to their mainly direct sales and with VARs. Note that Engenio went to an all OEM/distribution model several years ago maintaining direct touch support for their partners.

Note that NetApp is providing financial guidance that the deal could add $750M to FY12 revenue, which is based on retaining some portion of the existing OEM business while moving into new markets as well as increasing product diversity with existing direct customers, VARs or channel partners.

NetApp also gets to address storage market fragmentation and enable OEM as well as channel diversification including selling to other server vendors besides IBM. The Engenio model in addition to supporting Dell, IBM, Oracle, SGI and other server vendors also involves working with vertical solution integrator OEMs in the video, entertainment, High Performance Compute (HPC), cloud and MSP markets. This means that NetApp can enter new markets where bandwidth performance is needed including scale out NAS (beyond what NetApp has been doing). This also means that NetApp gets a product to sell into markets where back end storage for big data, bulk storage, media and entertainment, cloud and MSP as well as other applications leverage SAS, iSCSI or FC and FCoE beyond what their current lineup offers. Who sells into those spaces? Dell, HP, IBM, Oracle, SGI and Supermicro among others.

What does LSI get:

  • $480M USD cash and buy back some stock to keep investors happy
  • Streamline their business or open door for new ones
  • Perhaps increase OEM sales to other new or existing customers
  • Perhaps do some acquisitions or be acquired

What does Engenio get:
A new parent that hopefully invests in the technology and marketing of the solution sets, as well as leverages and takes care of the installed base of customers.

What do the combined Engenio and NetApp OEMs and partners get:
With the combination of the organizations, hopefully streamlined support, service and marketing, plus product enhancements to address new or different needs. Possibly also comfort in knowing that Engenio now has a home and its future is somewhat known.

What about the Engenio employees?
The reason I bring this up is wondering what happens to those who have many years invested along with their LSI stock, which I presume they keep, hoping that the sale gives them a future return on their investment or efforts. Having been in similar acquisitions in the past, it can be a rough go; however if the acquirer has a bright future, then enough said.

Some random thoughts:

Is this one of those industry trendy, sexy, cool everybody drooling type deals with new and upcoming technology and marketing buzz?
No

Is this one of those industry deals that has good upside potential if executed upon and leveraged?
Yes

NetApp already has a storage offering, so why do they need Engenio?
No offense to NetApp, however they have needed a robust block storage offering to complement their NAS file serving and extensive software functionality in order to move into different markets. This is not all that different from what EMC needed to do in the late 90s, extending beyond their sole cash cow platform Symmetrix by acquiring DG to gain a mid range offering.

NetApp is risking $480M on a business with technologies that some see or say is on the decline, so why would they do such a thing?
Ok, let's set the technology topics aside and, from a pure numbers perspective, take two scenarios (I'm not a financial person so go easy on me please). What some financial people have told me with other deals is that it is sometimes about getting a return on cash vs. it not doing anything. So with that and other things in mind, say NetApp just lets $480M sit in the bank; can they get 12 percent or better interest? Probably not, and if they can, I want the name of that bank. What that means is that for a five year period, if they could get that rate of return (12 percent), they would only make about $824M-480M=$344M on the investment (I know, there are tax and other financial considerations, however let's keep it simple). Now let's take another scenario and assume that NetApp simply rides a decline of the business at, say, a 20 percent per year rate (how many businesses are growing, or in storage declining, at 20 percent per year?) for five years. That works out to about a $1.4B yield. Let's take a different scenario and assume that NetApp can simply maintain an annual run rate of $700-750M for those five years; that works out to around $3.66B-480M=$3.1B revenue or return on investment. In other words, even with some decline, over a five year period the OEM business pays for the deal alone and perhaps helps fund investment in technology improvements, with the business balance being positive upside.
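For those who want to check the math themselves, here is a rough back of the envelope sketch in Python of the scenarios above (a hypothetical illustration using this post's assumptions; it ignores taxes, fees and discounting, which is presumably why the gross figures differ a bit from the ones quoted, and the post's $1.4B decline figure implies a lower retained starting run rate than the full $705M):

```python
# All figures in millions of USD; rates and horizons are the post's
# assumptions, not financial advice.

def compound(principal, rate, years):
    """Value of cash left to compound at a fixed annual rate."""
    return principal * (1 + rate) ** years

def revenue_total(start_run_rate, growth, years):
    """Cumulative revenue over a horizon at a fixed annual growth rate
    (a negative growth rate models a declining business)."""
    total, run_rate = 0.0, start_run_rate
    for _ in range(years):
        run_rate *= (1 + growth)
        total += run_rate
    return total

price = 480.0

# Scenario 1: the cash sits in the bank at 12 percent for five years.
bank = compound(price, 0.12, 5)           # ~$846M gross before taxes/fees

# Scenario 2: the business declines 20 percent per year for five years.
decline = revenue_total(705.0, -0.20, 5)  # cumulative revenue, ~$1.9B gross

# Scenario 3: NetApp simply holds a ~$730M annual run rate.
steady = 730.0 * 5                        # $3,650M, roughly the $3.66B cited
```

Either way you run it, the point stands: even a declining OEM revenue stream dwarfs what the same cash earns sitting in a bank.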

Now both of those are extreme scenarios, so let's take something more likely, such as NetApp being able to simply maintain a $700-750M run rate by keeping some of the OEM business, finding new markets for channel and OEM as well as direct sales, and expanding footprint into their markets. Now that math gets even more interesting. Having said all of that, NetApp needs to keep investing in the business and products to get those returns, which might help explain the relatively low price to run rate.

Is this a good deal for NetApp?
IMHO yes, as long as NetApp does not screw it up. If NetApp can manage the business, invest in it, grow into new markets instead of simple cannibalization, they will have made a good deal similar to what EMC did with DG back in the late 90s. However NetApp needs to execute, leverage what they are buying, invest in it and pick up new business to make up for the declining business with some of the OEMs.

With several hundred thousand systems or controllers having been sold over the years (granted, how many are actually running: your guess is as good as mine), NetApp has a footprint to leverage with their other products. For example, should IBM, Dell or Oracle completely walk away from those installed footprints, NetApp can move in with firmware or other upgrades for support, plus up sell their NAS gateways to add value with compression, dedupe, etc.

What about NetApps acquisition track record?
Fair question, although I'm sure the NetApp faithful won't like it. NetApp has had their ups and downs with acquisitions (Topio, Decru, Spinnaker, Onaro, etc); perhaps with this one, like EMC in the late 90s who bought DG to overcome some rough up and down acquisitions, they can also get their mojo on (see this post). While we are on the topic of acquisitions, NetApp recently bought Akorri and last year Bycast, which they now call StorageGrid and which has been OEMed in the past by IBM. Guess what storage was commonly used under the IBM servers running the Bycast software? If you guessed XIV you might want to take a mulligan or a do over. Btw, HP has also OEMed the Bycast software. If you are not familiar with Bycast and are interested in automated movement, tiering, policy management, objects and other buzzwords, ping your favorite NetApp person as it is a diamond in the rough if leveraged beyond its healthcare capabilities.

What does this mean for Xyratex and Dot Hill who are NetApp partners?
My guess is that for now, the general purpose enclosures would stay the same (e.g. Xyratex) until there is a business case to do something different. For the high density enclosures, that could be a different scenario. As for others, we will have to wait and see.

Will NetApp port OnTap into Engenio?
The easiest and fastest thing is to do what NetApp and Engenio OEM customers have already been doing, that is, place the Engenio arrays behind the NetApp FAS vFiler. Note that Engenio has storage systems that speak SAS to HDDs and SSDs as well as being able to speak SAS, iSCSI and FC to hosts or gateways. NetApp has also embraced SAS for back end storage; maybe we will see them leverage a SAS connection out of their filers in the future to SAS storage systems or shelves instead of FC loop?

Speaking of SAS host or server attached storage, guess what many cloud, MSP, high performance and other environment are using for storage on the back end of their clusters or scale out NAS systems?
Yup, SAS.

Guess what gap NetApp gets to fill, joining Dell, HP, IBM and Oracle who can now give a choice of SAS, iSCSI or FC in addition to NAS?
Yup, SAS.

Care to guess what storage vendor we can expect to hear downplay SAS as a storage system to server or gateway technology?
Hmm

Is this all about SAS?
No

Will this move scare EMC?
No, EMC does not get scared, or at least that is what they tell me.

Will LSI buy Fusion-io (who has filed or is filing their IPO documents), or someone else?
Your guess or speculation is better than mine. However LSI already has and is retaining their own PCIe SSD card.

Why only $480M for a business that did $705M in 2010?
Good question. There is risk in that if NetApp does not invest in the product, marketing and relationships, they will not see the previous annual run rate, so it is not a straight annuity. Consequently NetApp is taking risk with the business and thus they should get the reward if they can run with it. Another reason is that there probably were not any investment bankers or brokers running up the price.

Why didn't Dell buy Engenio for $480M?
Good question; if they had the chance they should have, however it probably would not have been a good fit as Dell needs direct sales vs. OEM sales.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

From bits to bytes: Decoding Encoding

With networking, care should be taken to understand if a given speed or performance capacity is being specified in bits or bytes as well as in base 2 (binary) or base 10 (decimal).

Another consideration and potential point of confusion are line rates (GBaud) and link speeds, which can vary based on encoding and low level frame or packet size. For example 1GbE along with 1, 2, 4 and 8Gb Fibre Channel as well as Serial Attached SCSI (SAS) use an 8b/10b encoding scheme. This means that at the lowest physical layer, 8 bits of data are placed into 10 bits for transmission, with the extra 2 bits being encoding overhead for data integrity.

With an 8Gb link using 8b/10b encoding, 2 out of every 10 bits are overhead. The actual data throughput (bandwidth, or number of IOPS, frames or packets per second) is a function of the link speed, encoding and baud rate. For example, 1Gb FC has a 1.0625 Gb per second line rate, which is multiplied by the generation, so 8Gb FC or 8GFC would be 8 x 1.0625 = 8.5Gb per second.

Remember to factor in that encoding overhead (e.g. 8 of 10 bits are for data with 8b/10b): usable bandwidth on the 8GFC link is about 6.8Gb per second, or about 850MBytes (6.8Gb / 8 bits) per second. 10GbE uses 64b/66b encoding, which means that for every 64 bits of data only 2 bits are used for data integrity checks, thus less overhead.
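For those who like to see the math as code, here is a quick sketch (Python) of the encoding calculations above, using the line rates and the text's convention of applying the encoding efficiency to the link rate:

```python
def effective_gbps(line_rate_gbaud, data_bits, total_bits):
    """Usable data rate in Gbit/s after encoding overhead:
    line rate times the fraction of bits that carry data."""
    return line_rate_gbaud * data_bits / total_bits

# 8Gb Fibre Channel (8GFC): 8 x 1.0625 = 8.5 Gbaud with 8b/10b encoding.
gfc8 = effective_gbps(8 * 1.0625, 8, 10)  # 6.8 Gb/s usable
gfc8_mbytes = gfc8 * 1000 / 8             # ~850 MBytes/s (6.8Gb / 8 bits)

# 10GbE with 64b/66b encoding: only 2 of every 66 bits are overhead.
ge10 = effective_gbps(10.0, 64, 66)       # ~9.7 Gb/s usable
```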

What do all of these bits and bytes have to do with cloud and virtual data storage networks?

Quite a bit, when you consider the need we have talked about to support more information processing, movement and storage in a denser footprint.

In order to support higher densities, faster servers, storage and networks are not enough on their own; various approaches to reducing the data footprint impact are also required.

What this means is that for fast networks to be effective, they also need lower overhead, so that capacity is used for moving productive work and data rather than extra overhead bits in the same amount of time.

PCIe leverages multiple serial unidirectional point to point links, known as lanes, compared to traditional PCI that used a parallel bus based design. With traditional PCI, the bus width varied from 32 to 64 bits, while with PCIe the number of lanes combined with the PCIe version and signaling rate determines performance. PCIe interfaces can have one, two, four, eight, sixteen or thirty two lanes for data movement depending on card or adapter format and form factor. For example, PCI and PCI-X performance can be up to 528 MBytes per second with a 64 bit, 66 MHz signaling rate.

 

                            PCIe Gen 1   PCIe Gen 2   PCIe Gen 3
Giga transfers per second   2.5          5            8
Encoding scheme             8b/10b       8b/10b       128b/130b
Data rate per lane          250MB        500MB        1GB
x32 lanes                   8GB          16GB         32GB

Table 1: PCIe generation comparisons

Table 1 shows performance characteristics of the various PCIe generations. With PCIe Gen 3, the effective performance essentially doubles; however the underlying transfer speed does not double as it did in past generations. Instead the improved performance is a combination of a roughly 60 percent faster link speed plus efficiency improvements gained by switching from an 8b/10b to a 128b/130b encoding scheme, among other optimizations.
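The per lane numbers in Table 1 fall out of the signaling rate and encoding scheme; here is a quick Python sketch of that math:

```python
def lane_mbytes_per_sec(gtps, data_bits, total_bits):
    """Usable MBytes/s per PCIe lane: giga-transfers per second (one
    raw bit per transfer per lane), less encoding overhead, over 8."""
    usable_gbps = gtps * data_bits / total_bits
    return usable_gbps * 1000 / 8

gen1 = lane_mbytes_per_sec(2.5, 8, 10)      # 250 MB/s per lane
gen2 = lane_mbytes_per_sec(5.0, 8, 10)      # 500 MB/s per lane
gen3 = lane_mbytes_per_sec(8.0, 128, 130)   # ~985 MB/s, rounded to 1GB/s

# An x32 slot aggregates 32 lanes: Gen 2 gives 32 x 500MB = 16GB/s.
x32_gen2 = 32 * gen2 / 1000                 # 16 GB/s
```

Note how Gen 3 gets close to 1GB/s per lane from only 8 GT/s, where an 8b/10b scheme would have needed 10 GT/s for the same result.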

Serial interface            Encoding
PCIe Gen 1                  8b/10b
PCIe Gen 2                  8b/10b
PCIe Gen 3                  128b/130b
Ethernet 1Gb                8b/10b
Ethernet 10Gb               64b/66b
Fibre Channel 1/2/4/8 Gb    8b/10b
SAS 6Gb                     8b/10b

Table 2: Common encoding schemes

Bringing this all together: in order to support cloud and virtual computing environments, data networks need to become faster as well as more efficient, otherwise you will be paying for more overhead per second vs. productive work being done. For example, with 64b/66b encoding on a 10GbE or FCoE link, about 96.97% of the overall bandwidth, or about 9.7Gb per second, is available for useful work.

By comparison, if 8b/10b encoding were used, only 80% of the available bandwidth would be left for useful data movement. For bandwidth oriented environments this means better throughput, while for applications that require lower response time or latency it means more IOPS, frames or packets per second.

The above is an example of where a small change such as the encoding scheme can have large benefit when applied to high volume or large environments.
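To put numbers on that small change, here is a quick sketch comparing the usable fraction of raw bits for the encoding schemes in Table 2:

```python
# Encoding efficiency = data bits / total bits on the wire.
schemes = {
    "8b/10b":    (8, 10),     # 1GbE, 1-8Gb FC, 6Gb SAS, PCIe Gen 1/2
    "64b/66b":   (64, 66),    # 10GbE and FCoE
    "128b/130b": (128, 130),  # PCIe Gen 3
}

for name, (data, total) in schemes.items():
    print(f"{name}: {100 * data / total:.2f}% efficient")
# prints 80.00%, 96.97% and 98.46% respectively
```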

Learn more in The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC) at https://storageio.com/books

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

As the Hard Disk Drive HDD continues to spin

server storage data infrastructure i/o iop hdd ssd trends

Updated 2/10/2018

Despite having been repeatedly declared dead at the hands of some new emerging technology over the past several decades, the Hard Disk Drive (HDD) continues to spin and evolve as it moves towards its 60th birthday.

More recently HDDs have been declared dead due to flash SSD, which according to some predictions should have made the HDD extinct by now.

Meanwhile, having not yet died, in addition to having qualified for its AARP membership a few years ago, the HDD continues to evolve with improvements in capacity, form factor, performance, reliability, density and cost.

Back in 2006 I did an article titled Happy 50th, hard drive, but will you make it to 60?

IMHO it is safe to say that the HDD will be around for at least a few more years if not another decade (or more).

This is not to say that the HDD has outlived its usefulness or that there are not other tiered storage mediums to do specific jobs or tasks better (there are).

Instead, the HDD continues to evolve and is complemented by flash SSD, in the way that HDDs are complementing magnetic tape (another declared dead technology), each finding new roles to support more data being stored for longer periods of time.

After all, there is no such thing as a data or information recession!

The importance of this is technology tiering and resource alignment: matching the applicable technology to the task at hand.

Technology tiering (Servers, storage, networking, snow removal) is about aligning the applicable resource that is best suited to a particular need in a cost as well as productive manner. The HDD remains a viable tiered storage medium that continues to evolve while taking on new roles coexisting with SSD and tape along with cloud resources. These and other technologies have their place which ideally is finding or expanding into new markets instead of simply trying to cannibalize each other for market share.

Here is a link to a good story by Lucas Mearian on the history or evolution of the hard disk drive (HDD) including how a 1TB device that costs about $60 today would have cost about a trillion dollars back in the 1950s. FWIW, IMHO the 1 trillion dollars is low and should be more around 2 to 5 trillion for the one TByte if you apply common costs for management, people, care and feeding, power, cooling, backup, BC, DR and other functions.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

IMHO, it is safe to say that the HDD is here to stay for at least a few more years (if not decades) or at least until someone decides to try a new creative marketing approach by declaring it dead (again).

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dude, is Dell doing a disk deal again with Compellent?

Over in Eden Prairie (a Minneapolis, Minnesota suburb) where data storage vendor Compellent (CML) is based, they must be singing in the hallways today that it is beginning to feel a lot like Christmas.

Sure we had another dusting of snow this morning here in the Minneapolis area and the temp is actually up in the balmy 20F temperature range (was around 0F yesterday) and holiday shopping is in full swing.

The other reason I think that the Compellent folks are thinking that it feels a lot like Christmas are the reports that Dell is in exclusive talks to buy them at about $29 per share or about $876 million USD.

Dell is no stranger to holiday or shopping sprees, check these posts out as examples:

Dell Will Buy Someone, However Not Brocade (At least for now)

Back to school shopping: Dude, Dell Digests 3PAR Disk storage (we now know Dell was out bid)

Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize

Data footprint reduction (Part 1): Life beyond dedupe and changing data lifecycles

Post Holiday IT Shopping Bargains, Dell Buying Exanet?

Did someone forget to tell Dell that Tape is dead?

Now some Compellent fans are not going to be happy with only about $29 a share, or about $876 million USD, given the recent stock run up into the $30 plus range. Likewise, some of the Compellent fans may be hoping for or expecting a bidding war to drive the stock back up into the $30 range; however keep in mind that it was earlier this year when the stock adjusted itself down into the mid teens.

In the case of 3PAR and the HP Dell bidding war, that was a different product and company focused on a different space than where Compellent has a good fit.

Sure both 3PAR and Compellent do Fibre Channel (FC) where Dell's EqualLogic only does iSCSI; however a valuation based just on FC would be like saying Dell has all the storage capabilities they need with their MD3000 series that can do SAS, iSCSI and FC.

In other words, there are different storage products for different markets, price bands and customer application needs. Kind of like winter here in Minnesota: sure, one type of shovel will work for moving snow, or you can leverage different technologies and techniques (tiering) to get the job done effectively; the same holds for storage solutions.

Compellent has a good Cadillac product that is a good fit for some SMB environments. However the SMB space is also where Dell has several storage products some of which they own (e.g. EqualLogic), some they OEM (MD3000 series and NX) as well as resell (e.g. EMC CLARiiON).

Can the Compellent product replace the lower end CLARiiON business that Dell has itself been shifting more to their flagship EqualLogic product?

Sure however at the risk of revenue cannibalization or worse, introduction of revenue prevention teams.

Can the Compellent product then be positioned lower down under the EqualLogic product?

Sure, however why hold it back, not to mention force a higher priced product down into that market segment?

Can the Compellent product be taken up market to compete above the EqualLogic head to head with the larger CLARiiON systems from EMC or comparable solutions from other vendors?

Sure, however I can hear choruses of it's sounding a lot like Christmas from New England, the bay area and Tucson among others.

Does this mean that Dell is being overly generous and that this is not a good deal?

No, not at all.

Sure it is the holiday season and Dell has several billion dollars of cash lying around; however that in itself does not guarantee a large handout or government sized bailout (excuse me, infusion). At $30 or more, that would be overly generous simply based on where the technology fits as well as aligns to the market realities. Consequently, at $29, this is a great deal for Compellent and also for Dell.

Why is it a good deal for Dell?

I think that it is as much about Dell getting a good deal (ok, paying a premium) to acquire a competitor that they can use to fill some product gaps where they have common VARs. However I also think that this is very much about the channel and the VARs, as much if not more than it is just about a storage product. Servers are part of the game here, which in turn supports storage, networking, management tools, backup/recovery, archiving and services.

Sure Dell can maybe take some cost out of the Compellent solution by replacing the Supermicro PCs that are the hardware platform for their storage controllers with Dell servers. However the bigger play is around further developing its channel and VAR ecosystems, some of whom were with EqualLogic before Dell bought them. This can also be seen as a means of Dell getting that partner ecosystem to sell, overall, more Dell products and solutions instead of those from Apple, EMC, Fujitsu, HP, IBM, Oracle and many others.

Likewise, I doubt that Mr. Dell is paying a premium simply to make the Compellent shareholders and fans happy, or to create monetary velocity to stimulate holiday shopping and the economy. However, for the fans: sure, while drowning your sorrows in the eggnog of holiday cheer because you are not getting $30 or higher, instead buy a round for your mates and toast Dell for your holiday gift.

The real reason I think this is a good deal for Dell is that from a business and financial perspective, assuming they stick to the $29 range, it is a good bargain for both parties. Dell gets a company that has been competing with their EqualLogic product, in some cases with the same VARs or resellers. Sure, it gets a Fibre Channel based product; however Dell already has that with the MD3000 series, which I realize is less function laden than Compellent or EqualLogic, however it is also more affordable for a different market.

If Dell can close the deal sticking to its offer (which they have the upper hand on), execute by rolling out a strategy as well as a product positioning plan, and then educate their own teams as well as VARs and customers on which products fit where and when, in a manner that does not cause revenue prevention (e.g. one product or team blocking the other) or cannibalization but instead expands markets, they can do well.

While Compellent gets a huge price multiple based on their revenue (about $125M USD), if Dell can get the product revenue up from the $125 to $150 million plateau to around $250 to $300 million without cannibalizing other Dell products, the deal pays for itself in many ways.

Keep in mind that a large pile of cash sitting in the bank these days is not exactly yielding the best returns on investment.

For the Compellent fans and shareholders, congratulations!

You have gotten, or perhaps are about to get, a good holiday gift, so knock off the complaining that you should be getting more. The option is that instead of $28 per share, you could be getting 28 lumps of coal in your Christmas stocking.

For the Dell folks, assuming the deal is done on their terms and that they can quickly rationalize the product overlap, convey and then execute on a strategy while keeping the revenue prevention teams on the sidelines, you too have a holiday gift to work with (some assembly required, however). This is also good for Dell outside of storage, which may turn out to be one of the gems of the deal: keeping or expanding VARs selling Dell based servers and associated technologies.

For EMC, who was slapped in the face earlier this year when Dell took a run at 3PAR, sure there will be more erosion on the lower end CLARiiON as has been occurring with the EqualLogic. However Dell still needs a solution to effectively compete with EMC and others at the higher end of the SMB or lower end of the enterprise market.

Sure the EqualLogic or Compellent products could be deployed into such scenarios; however those solutions are then playing on a different field and out of their market sweet spots.

Let's see what happens, shall we?

In the meantime, what say you?

Is this a good deal for Dell, who is the deal good for assuming it goes through and at the terms mentioned, what is your take?

Who benefits from this proposed deal?

Note that in the holiday gift giving spirit, Chicago style voting or polling will be enabled.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Is the new HDS VSP really the MVSP?

Today HDS announced, with much fanfare and what must have been a million dollar launch budget, the VSP (successor to the previous USP V and USP VM).

I'm also thinking that the HDS VSP (not to be confused with the HP SVSP that HP OEMs via LSI) could also be called the HDS MVSP.

Now if you are part of the HDS SAN, LAN, MAN, WAN or FAN bandwagon, MVSP could mean Most Valuable Storage Platform or Most Virtualized Storage Product. MVSP might also be called More Virtualized Storage Products by others.

Yet OTOH, MVSP could be More Virtual Story Points (e.g. talking points) for HDS building upon and when comparing to their previous products.

For example among others:

More cache to drive cash movement (e.g. cash velocity or revenue)
More claims and counter claims of industry uniques or firsts
More cloud material or discussion topics
More cross points
More data mobility
More density
More FUD and MUD throwing by competitors
More functionality
More packets of information to move, manage and store
More pages in the media
More partitioning of resources
More partners to sell through or to
More PBytes
More performance and bandwidths
More platforms virtualized
More platters
More points of resiliency
More ports to connect to or through
More posts from bloggers
More power management, Eco and Green talking points
More press releases
More processors
More products to sell
More profits to be made
More protocols (Fibre Channel, FICON, FCoE, NAS) supported
More pundits' praises
More SAS, SATA and SSD (flash drives) devices supported
More scale up, scale out, and scale within
More security
More single (Virtual and Physical) pane of glass managements
More software to sell and be licensed by customers
More use of virtualization, 3D and other TLAs
More videos to watch or be stored

I'm sure more points can be thought of, however that is a good start for now, including some to have a bit of fun with.

Read more about the HDS announcement here, here, here and here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

What is DFR or Data Footprint Reduction?

Updated 10/9/2018

Data Footprint Reduction (DFR) is a collection of techniques, technologies, tools and best practices that are used to address data growth management challenges. Dedupe is currently the industry darling for DFR particularly in the scope or context of backup or other repetitive data.

However, DFR expands that scope beyond backup to address expanding data footprints and their impact across primary and secondary along with offline data, ranging from high performance to inactive high capacity.

Consequently, the focus of DFR is not just on reduction ratios; it's also about meeting time or performance rates and data protection windows.

This means DFR is about using the right tool for the task at hand to effectively meet business needs and cost objectives while meeting service requirements across all applications.

Examples of DFR technologies include Archiving, Compression, Dedupe, Data Management and Thin Provisioning among others.
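To make the ratios-versus-rates point above concrete, here is a minimal sketch in Python. All function names and numbers are illustrative assumptions, not taken from any product: it compares a hypothetical high-ratio dedupe approach against a lower-ratio but faster compression approach on the same backup set, showing why a big reduction ratio alone does not guarantee the protection window is met.

```python
# Hypothetical sketch: evaluate DFR techniques by reduction ratio
# AND by effective rate (how fast logical data actually moves),
# since a high ratio is of little value if the backup window is missed.

def reduction_ratio(logical_bytes, stored_bytes):
    """Reduction ratio, e.g. 10.0 means 10:1."""
    return logical_bytes / stored_bytes

def effective_rate(logical_bytes, elapsed_seconds):
    """Logical bytes processed per second (a rate, not a ratio)."""
    return logical_bytes / elapsed_seconds

GB = 1024 ** 3

# Illustrative numbers only: a 1 TB (logical) backup set handled two ways.
logical = 1024 * GB
approaches = {
    "dedupe":   {"stored": 102.4 * GB, "seconds": 4 * 3600},  # 10:1, 4 hours
    "compress": {"stored": 512 * GB,   "seconds": 1 * 3600},  # 2:1, 1 hour
}

for name, d in approaches.items():
    ratio = reduction_ratio(logical, d["stored"])
    rate_mb = effective_rate(logical, d["seconds"]) / (1024 ** 2)
    print(f"{name}: {ratio:.0f}:1 reduction at {rate_mb:.0f} MB/s effective")
```

In this made-up example the compression approach stores more data yet finishes in a quarter of the time, which may better fit a tight protection window; the right answer depends on the service requirements, which is exactly the point.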

Read more about DFR in Part I and Part II of a two part series found here and here.

Where to learn more

Learn more about data footprint reduction (DFR), data footprint overhead and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

That is all for now; hope you find this ongoing series of current or emerging Industry Trends and Perspectives posts of interest.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Has FCoE entered the trough of disillusionment?

This is part of an ongoing series of short industry trends and perspectives blog post briefs based on what I am seeing and hearing in my conversations with IT professionals on a global basis.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports, videos, podcasts, webcasts as well as solution brief content found at www.storageioblog.com/reports and www.storageio.com/articles.

Has FCoE (Fibre Channel over Ethernet) entered the trough of disillusionment?

IMHO yes, and that is not a bad thing if you like FCoE (which I do, among other technologies).

The reason I think that it is good that FCoE is in or entering the trough is not that I do not believe in FCoE. Instead, the reason is that most if not all technologies that are more than a passing fad often go through a hype and early adopter phase before taking a breather prior to broader longer term adoption.

Sure there are FCoE solutions available including switches, CNAs and even storage systems from various vendors. However, FCoE is still very much in its infancy and maturing.

Based on conversations with IT customer professionals (e.g. those who are not vendors, VARs, consultants, media or analysts) and hearing their plans, I believe that FCoE has entered the proverbial trough of disillusionment, which is a good thing in that FCoE is also ramping up for deployment.

Another common question that comes up regarding FCoE as well as other IO networking interfaces, transports and protocols is if they are temporal (temporary short life span) technologies.

Perhaps, in the scope that all technologies are temporary; however, it is their temporal timeframe that should be of interest. Given that FCoE will probably have at least a ten to fifteen year temporal timeline, I would say in technology terms it has a relatively long life for supporting coexistence on the continued road to convergence, which appears to be around Ethernet.

That is where I feel FCoE is at currently, taking a break from the initial hype, maturing while IT organizations begin planning for its future deployment.

I see FCoE as having a bright future coexisting with other complementary and enabling technologies such as IO Virtualization (IOV) including PCI SIG MR-IOV, Converged Networking, iSCSI, SAS and NAS among others.

Keep in mind that FCoE does not have to be seen as competitive to iSCSI or NAS, as they all can coexist on a common DCB/CEE/DCE environment, enabling the best of all worlds not to mention choice. FCoE along with DCB/CEE/DCE provides IT professionals with choice options (e.g. tiered I/O and networking) to align the applicable technology to the task at hand for physical or virtual environments.

Again, the question pertaining to FCoE for many organizations, particularly those not going to iSCSI or NAS for all or part of their needs, should be when, where and how to deploy.

This means that for those with long lead time planning and deployment cycles, now is the time to put your strategy into place for what you will be doing over the next couple of years if not sooner.

For those interested, here is a link (may require registration) to a good conversation taking place over on IT Toolbox regarding FCoE and other related themes that may be of interest.

Here are some links to additional related material:

  • FCoE Infrastructure Coming Together
  • 2010 and 2011 Trends, Perspectives and Predictions: More of the same?
  • SNWSpotlight: 8G FC and FCoE, Solid State Storage
  • NetApp and Cisco roll out vSphere compatible FCoE solutions
  • Fibre Channel over Ethernet FAQs
  • Fast Fibre Channel and iSCSI switches deliver big pipes to virtualized SAN environments.
  • Poll: Networking Convergence, Ethernet, InfiniBand or both?
  • I/O Virtualization (IOV) Revisited
  • Will 6Gb SAS kill Fibre Channel?
  • Experts Corner: Q and A with Greg Schulz at StorageIO
  • Networking Convergence, Ethernet, Infiniband or both?
  • Vendors hail Fibre Channel over Ethernet spec
  • Cisco, NetApp and VMware combine for ‘end-to-end’ FCoE storage
  • FCoE: The great convergence, or not?
  • I/O virtualization and Fibre Channel over Ethernet (FCoE): How do they differ?
  • Chapter 9 – Networking with your servers and storage: The Green and Virtual Data Center (CRC)
  • Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)

That is all for now; hope you find this ongoing series of current or emerging Industry Trends and Perspectives posts of interest.

Of course let me know what your thoughts and perspectives are on this and other related topics.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

August 2010 StorageIO News Letter

StorageIO News Letter Image
August 2010 Newsletter

Welcome to the August Summer Wrap Up 2010 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the June 2010 edition, building on the great feedback received from recipients.
Items that are new in this expanded edition include:

  • Out and About Update
  • Industry Trends and Perspectives (ITP)
  • Featured Article

You can access this news letter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the August 2010 edition as an HTML or PDF or, to go to the newsletter page to view previous editions.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the news letter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio