Cloud and Virtual Data Storage Networking book released

Ok, it's now official: following its debut at the VMworld 2011 bookstore last week in Las Vegas, my new book Cloud and Virtual Data Storage Networking (CRC Press) is now formally released, with general availability announced today. Companion material is located at https://storageioblog.com/book3, including the Cloud and Virtual Data Storage Networking LinkedIn group page launched a few months ago. Cloud and Virtual Data Storage Networking (CVDSN), a 370-page hardcover, is my third solo book, following The Green and Virtual Data Center (CRC Press, 2009) and Resilient Storage Networks (Elsevier, 2004).

Cloud and Virtual Data Storage Networking Book by Greg Schulz
The CVDSN book was on display at the VMworld 2011 bookstore last week along with a new book by Duncan Epping (aka @DuncanYB) and Frank Denneman (aka @frankdenneman) titled VMware vSphere 5 Clustering Technical Deepdive. You can get your copy of Duncan and Frank's new book on Amazon here.

Greg Schulz during book signing at VMworld 2011
Here is a photo of me (on the left) visiting with a VMworld 2011 attendee in the VMworld bookstore.

 

What's inside the book: theme and topics covered

When it comes to clouds, virtualization, and converged and dynamic infrastructures: don't be scared, but do look before you leap and be prepared, which includes doing your homework.

What this means is that you should do your homework, prepare, learn, and get involved with proofs of concept (POCs) and training to build the momentum and success to continue an ongoing IT journey. Identify where cloud, virtualization and data storage networking technologies and techniques complement and enable your journey toward efficient, effective and productive optimized IT services delivery.

 

There is no such thing as a data or information recession: Do more with what you have

A common challenge in many organizations is exploding data growth along with the associated management tasks and constraints, including budgets, staffing, time, physical facilities, floor space, and power and cooling. IT clouds and dynamic infrastructure environments enable flexible, efficient, optimized, cost-effective and productive services delivery. The amount of data being generated, processed, and stored continues to grow, a trend that does not appear to be changing in the future. Even during the recent economic crisis, there has been no slowdown or information recession. Instead, the need to process, move, and store data has only increased; in fact, both people and data are living longer. CVDSN presents options, technologies, best practices and strategies for IT organizations looking to do more with what they have while supporting growth and new services without compromising on cost or QoS delivery (see figure below).

Driving Return on Innovation, the new ROI: Doing more and reducing costs while boosting productivity

 

Expanding focus from efficiency and optimization to effectiveness and productivity

A primary tenet of a cloud and virtualized environment is to support growing demand in a cost-effective manner with increased agility and without compromising QoS. By removing complexity and enabling agility, information services can be delivered in a timely manner to meet changing business needs.

 

There are many types of information services delivery model options

Various information services delivery models should be combined to meet different needs and requirements. These complementary service delivery options and descriptive terms include cloud, virtual and data storage network enabled environments: dynamic infrastructure, public, private and hybrid cloud, abstracted, multi-tenant, capacity on demand, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), among others.

Convergence: combining different technology domains and skill sets

Components of a cloud and virtual environment include desktops, servers, storage, networking, hardware, software and services, along with APIs and software stacks. These include virtual and physical desktops; data, voice and storage networks; LANs, SANs, MANs and WANs; faster blade and rack servers with more memory; SSD and high-capacity storage; and associated virtualization tools and management software. True convergence combines technology with people, processes and best practices, aligned to make the most of those resources for cost-effective services delivery.

 

Best people, processes, practices and products (the four Ps)

Bringing all the various components together are the four Ps: people skill sets, processes, practices and products. This means leveraging and enhancing people skill sets and experience; processes and procedures that optimize workflow for streamlined service orchestration; practices and policies that more effectively reduce waste without causing new bottlenecks; and products such as racks, stacks, hardware, software, and managed or cloud services.

 

Service categories and catalogs, templates, SLO and SLA alignment

Establishing service categories aligned to known service levels and costs enables resources to be aligned to applicable SLO and SLA requirements. Leveraging service templates and defined policies can enable automation and rapid provisioning of resources including self-service requests.

 

Navigating to effective IT services delivery: Metrics, measurements and E2E management

You cannot effectively manage what you do not know about; likewise, without situational awareness or navigation tools, you are flying blind. End to end (E2E) tools can provide monitoring and usage metrics for reporting and accounting, including enabling comparison with other environments. Metrics include customer service satisfaction, SLOs and SLAs, QoS, performance, availability and cost per service delivered.
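
As a simple illustration of turning raw monitoring numbers into the kinds of metrics listed above, here is a minimal sketch (Python, with made-up figures) that computes availability from downtime and a cost-per-unit-of-service value. The function names and all numbers are hypothetical examples, not material from the book.

```python
# Hypothetical example: turning raw monitoring numbers into service metrics
# that can be compared against SLO/SLA targets. All figures are made up.

def availability_pct(downtime_minutes: float, period_days: int = 30) -> float:
    """Availability as a percentage of the reporting period."""
    total_minutes = period_days * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def cost_per_unit_delivered(total_monthly_cost: float, units_delivered: int) -> float:
    """Cost per unit of service delivered (e.g., per mailbox, per VM, per TB served)."""
    return total_monthly_cost / units_delivered

if __name__ == "__main__":
    # 43 minutes of downtime in a 30-day month
    print(f"Availability: {availability_pct(43):.3f}%")                      # ~99.900%
    # $25,000/month infrastructure cost spread across 5,000 mailboxes
    print(f"Cost per mailbox: ${cost_per_unit_delivered(25000, 5000):.2f}/month")
```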

 

The importance of data protection for virtual, cloud and physical environments

Clouds and virtualization are important tools and technologies for protecting existing consolidated or converged as well as traditional environments. Likewise, virtual and cloud environments or data placed there also need to be protected. Now is the time to rethink and modernize your data protection strategy to be more effective, protecting, preserving and serving more data for longer periods of time with less complexity and cost.

 

Packing smart and effectively for your journey: Data footprint reduction (DFR)

Reducing your data footprint impact by leveraging data footprint reduction (DFR) techniques, technologies and best practices is important for enabling an optimized, efficient and effective IT services delivery environment. Clouds and virtualization help reduce your data footprint by providing a means and mechanism for archiving inactive data and transparently moving it. Conversely, moving to a cloud and virtualized environment to do more with what you have is enhanced by reducing the impact of your data footprint. The ABCDs of data footprint reduction include Archiving, Backup modernization, Compression and consolidation, Data management and dedupe, along with storage tiering and thin provisioning, among other techniques.
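
To make the ABCDs a bit more concrete, here is a hypothetical back-of-the-envelope sketch of how archiving inactive data and compressing what remains can combine to shrink a primary storage footprint. The 40% archive fraction and 2:1 compression ratio below are illustrative assumptions only, not measured or recommended values.

```python
# Hypothetical back-of-the-envelope estimate of combined data footprint reduction (DFR).
# The percentages and ratios below are illustrative assumptions, not measured results.

def remaining_after_dfr(primary_tb: float,
                        archive_fraction: float,
                        compression_ratio: float) -> float:
    """Primary capacity left after archiving inactive data off primary storage
    and compressing what remains (ratio expressed as N:1)."""
    active_tb = primary_tb * (1.0 - archive_fraction)
    return active_tb / compression_ratio

if __name__ == "__main__":
    primary_tb = 100.0          # starting primary storage footprint
    archived = 0.40             # assume 40% of data is inactive and archived off primary
    compression = 2.0           # assume 2:1 compression on the remaining active data

    after = remaining_after_dfr(primary_tb, archived, compression)
    print(f"Primary footprint after archive + compression: {after:.0f} TB")   # 30 TB
    print(f"Overall reduction: {100 * (1 - after / primary_tb):.0f}%")         # 70%
```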

Cloud and Virtual Data Storage Networking book by Greg Schulz

How the book is laid out:

  • Table of contents (TOC)
  • How the book is organized and who should read it
  • Preface
  • Section I: Why the need for cloud, virtualization and data storage networks
  • Chapter 1: Industry trends and perspectives: From issues and challenges to opportunities
  • Chapter 2: Cloud, virtualization and data storage networking fundamentals
  • Section II: Managing data and resources: Protect, preserve, secure and serve
  • Chapter 3: Infrastructure Resource Management (IRM)
  • Chapter 4: Data and storage networking security
  • Chapter 5: Data protection (Backup/Restore, BC and DR)
  • Chapter 6: Metrics and measurement for situational awareness
  • Section III: Technology, tools and solution options
  • Chapter 7: Data footprint reduction: Enabling cost-effective data demand growth
  • Chapter 8: Enabling data footprint reduction: Storage capacity optimization
  • Chapter 9: Storage services and systems
  • Chapter 10: Server virtualization
  • Chapter 11: Connectivity: Networking with your servers and storage
  • Chapter 12: Cloud and solution packages
  • Chapter 13: Management and tools
  • Section IV: Putting IT all together
  • Chapter 14: Applying what you have learned
  • Chapter 15: Wrap-up, what’s next and book summary
  • Appendices:
  • Where to Learn More
  • Index and Glossary

Here is the release that went out via Business Wire (aka Bizwire) earlier today.

 

Industry Veteran Greg Schulz of StorageIO Reveals Latest IT Strategies in “Cloud and Virtual Data Storage Networking” Book
StorageIO Founder Launches the Definitive Book for Enabling Cloud, Virtualized, Dynamic, and Converged Infrastructures

Stillwater, Minnesota – September 7, 2011  – The Server and StorageIO Group (www.storageio.com), a leading independent IT industry advisory and consultancy firm, in conjunction with  publisher CRC Press, a Taylor and Francis imprint, today announced the release of “Cloud and Virtual Data Storage Networking,” a new book by Greg Schulz, noted author and StorageIO founder. The book examines strategies for the design, implementation, and management of hardware, software, and services technologies that enable the most advanced, dynamic, and flexible cloud and virtual environments.

Cloud and Virtual Data Storage Networking

The book supplies real-world perspectives, tips, recommendations, figures, and diagrams on creating an efficient, flexible and optimized IT service delivery infrastructure to support demand without compromising quality of service (QoS) in a cost-effective manner. “Cloud and Virtual Data Storage Networking” looks at converging IT resources and management technologies to facilitate efficient and effective delivery of information services, including enabling information factories. Schulz guides readers of all experience levels through the various technologies and techniques available to them for enabling efficient information services.

Topics covered in the book include:

  • Information services model options and best practices
  • Metrics for efficient E2E IT management and measurement
  • Server, storage, I/O networking, and data center virtualization
  • Converged and cloud storage services (IaaS, PaaS, SaaS)
  • Public, private, and hybrid cloud and managed services
  • Data protection for virtual, cloud, and physical environments
  • Data footprint reduction (archive, backup modernization, compression, dedupe)
  • High availability, business continuance (BC), and disaster recovery (DR)
  • Performance, availability and capacity optimization

This book explains when, where, with what, and how to leverage cloud, virtual, and data storage networking as part of an IT infrastructure today and in the future. “Cloud and Virtual Data Storage Networking” comprehensively covers IT data storage networking infrastructures, including public, private and hybrid cloud, managed services, virtualization, and traditional IT environments.

“With all the chatter in the market about cloud storage and how it can solve all your problems, the industry needed a clear breakdown of the facts and how to use cloud storage effectively. Greg’s latest book does exactly that,” said Greg Brunton of EDS, an HP company.

Click here to watch and listen as Schulz discusses his new book in this video about Cloud and Virtual Data Storage Networking.

About the Book

Cloud and Virtual Data Storage Networking has 370 pages, with more than 100 figures and tables, 15 chapters plus appendices, as well as a glossary. CRC Press catalog number K12375, ISBN-10: 1439851735, ISBN-13: 9781439851739, published September 2011. The hardcover book can be purchased now at global venues including Amazon, Barnes and Noble, Digital Guru and CRCPress.com. Companion material is located at https://storageioblog.com/book3, including images, additional information, supporting links at CRC Press, the LinkedIn Cloud and Virtual Data Storage Networking group, and other books by the author. Direct book editorial review inquiries to John Wyzalek of CRC Press at john.wyzalek@taylorfrancis.com (twitter @jwyzalek) or +1 (917) 351-7149. For bulk and special orders contact Chris Manion of CRC Press at chris.manion@taylorandfrancis.com or +1 (561) 998-2508. For custom, derivative works and excerpts, contact StorageIO at info@storageio.com.

About the Author

Greg Schulz is the founder of the independent IT industry advisory firm StorageIO. Before forming StorageIO, Schulz worked for several vendors in systems engineering, sales, and marketing technologist roles. In addition to having been an analyst, vendor and VAR, Schulz also gained real-world, hands-on experience working in IT organizations across different industry sectors. His IT customer experience spans systems development, systems administration, disaster recovery consulting, and capacity planning across different technology domains, including servers, storage, I/O networking hardware, software and services. Today, in addition to his analyst and research duties, Schulz is a prolific writer, blogger, and sought-after speaker, sharing his expertise with worldwide technology manufacturers and resellers, IT users, and members of the media. With an insightful and thought-provoking style, Schulz is also the author of “The Green and Virtual Data Center” (CRC Press, 2009), which is on Intel's recommended reading list for developers, and the SNIA-endorsed book “Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures” (Elsevier, 2004). Schulz is available for interviews and commentary, briefings, speaking engagements at conferences and private events, webinars, video and podcasts, along with custom advisory consultation sessions. Learn more at https://storageio.com.

End of press release.

Wrap up

I want to express thanks to all of those involved with the project, which spanned the past year.

Stay tuned for more news and updates pertaining to Cloud and Virtual Data Storage Networking along with related material, including upcoming events as well as chapter excerpts. Speaking of events, here is information on an upcoming workshop seminar that I will be involved with for IT storage and networking professionals, to be held October 4th and 5th in the Netherlands.

You can get your copy now at global venues including Amazon, Barnes and Noble, Digital Guru and CRCPress.com.

Ok, nuff said, for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

StorageIO going Dutch again: October 2011 Seminar for storage professionals

Greg Schulz of StorageIO, in conjunction with our Dutch partner Brouwer Storage Consultancy, will be presenting a two-day workshop seminar for IT storage, virtualization, and networking professionals on Monday 3rd and Tuesday 4th of October 2011 at Ampt van Nijkerk, Netherlands.

Brouwer Storage Consultancy | The Server and StorageIO Group

This two-day interactive education seminar for storage professionals will focus on current data and storage networking trends, technology and business challenges, along with available technologies and solutions. During the seminar, learn what technologies and management techniques are available, how different vendors' solutions compare, and what to use when and where. This seminar digs into the various IT tools, techniques, technologies and best practices for enabling an efficient, effective, flexible, scalable and resilient data infrastructure.

The format of this two-day seminar will be a mix of presentation and interactive discussion, allowing attendees plenty of time to discuss among themselves and with the seminar presenters. Attendees will gain insight into how to compare and contrast various technologies and solutions, in addition to identifying and aligning those solutions to their specific issues, challenges and requirements.

Major themes that will be discussed include:

  • Who is doing what with various storage solutions and tools
  • Is RAID still relevant for today and tomorrow
  • Are hard disk drives and tape finally dead at the hands of SSD and clouds
  • What am I routinely hearing, seeing or being asked to comment on
  • Enabling storage optimization, efficiency and effectiveness (performance and capacity)
  • Opportunities for leveraging various technologies, techniques and trends
  • Supporting virtual servers including re-architecting data protection
  • How to modernize data protection (backup/restore, BC, DR, replication, snapshots)
  • Data footprint reduction (DFR) including archive, compression and dedupe
  • Clarifying cloud confusion, don’t be scared, however look before you leap
  • Big data, big bandwidth and virtual desktop infrastructures (VDI)

In addition, this two-day seminar will look at new and improved technologies and techniques, and who is doing what, along with discussion of industry and vendor activity including mergers and acquisitions. In addition to seminar handout materials, attendees will also receive a copy of Cloud and Virtual Data Storage Networking (CRC Press) by Greg Schulz, which looks at enabling efficient, optimized and effective information services delivery across cloud, virtual and traditional environments.

Cloud and Virtual Data Storage Networking Book

Buzzwords and topic themes to be discussed among others include E2E, FCoE and DCB, CNAs, SAS, I/O virtualization, server and storage virtualization, public and private cloud, Dynamic Infrastructures, VDI, RAID and advanced data protection options, SSD, flash, SAN, DAS and NAS, object storage, big data and big bandwidth, backup, BC, DR, application optimized or aware storage, open storage, scale out storage solutions, federated management, metrics and measurements, performance and capacity, data movement and migration, storage tiering, data protection modernization, SRA and SRM, data footprint reduction (archive, compress, dedupe), unified and multi-protocol storage, solution bundle and stacks.

For more information or to register contact Brouwer Storage Consultancy

Brouwer Storage Consultancy
Olevoortseweg 43
3861 MH Nijkerk
The Netherlands
Telephone: +31-33-246-6825
Cell: +31-652-601-309
Fax: +31-33-245-8956
Email: info@brouwerconsultancy.com
Web: www.brouwerconsultancy.com

Brouwer Storage Consultancy

Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Supporting IT growth demand during economically uncertain times

Doing more with less, doing more with what you have, or reducing cost has been the mantra for the past several years.

Does that mean as a trend, they are being adopted as the new way of doing business, or simply a cycle or temporary situation?

The reality is that many if not most IT organizations are and will remain under pressure to stretch their budgets further for the immediate future. Over the past year or two, some organizations saw increases in their budgets along with increased demand, while others saw budgets held flat or reduced while still having to support growth. On the other hand, there is no such thing as an information recession, with more data being generated, moved, processed, stored and retained for longer periods of time.

Industry trend: No such thing as a data recession

Something has to give, as shown in the following figure: one curve shows continued demand and growth, another shows the need to reduce costs, and another reflects the importance of maintaining or enhancing service level objectives (SLOs) and quality of service (QoS).

Enable growth while removing complexity and cost without compromising service levels

One way to reduce costs is to inhibit growth; another is to support growth by sacrificing QoS, including performance, response time or availability, as a result of over-consolidation, excessive utilization or the instability that comes from stretching resources too far. Where innovation comes into play is finding and fixing problems vs. moving or masking them, or treating symptoms vs. the real issue and challenge. Innovation also comes into play by identifying both near-term tactical as well as longer-term strategic means of taking complexity and cost out of service delivery and the resources needed to support it. For example, determine the different resources and processes involved in delivering an email box of a given size and reliability. Another example is supporting a virtual machine (VM) with a given performance and capacity capability. Yet another scenario is a file share or home directory of a specific size and availability. By streamlining workflows, leveraging automation and other tools to enforce policies, and adopting new best practices, complexity and thereby costs can be reduced. The net result is a lower cost to provide a given service to a specific level, which, when multiplied out over many users or instances, results in cost savings as well as productivity gains.
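
As a rough illustration of the email box example above, here is a minimal sketch of working out a fully loaded per-mailbox cost and seeing how a change in data protection and automation moves it. Every number and function name is an assumption for illustration only, not a benchmark or recommendation.

```python
# Hypothetical sketch of working out the fully loaded cost of delivering one
# mailbox of a given size, then seeing how a process/automation change moves it.
# Every number here is an assumption for illustration only.

def mailbox_cost(storage_gb: float, cost_per_gb_month: float,
                 protection_overhead: float, admin_minutes_month: float,
                 admin_cost_per_hour: float) -> float:
    """Monthly cost to deliver one mailbox: storage + data protection + admin labor."""
    storage = storage_gb * cost_per_gb_month * (1.0 + protection_overhead)
    labor = (admin_minutes_month / 60.0) * admin_cost_per_hour
    return storage + labor

if __name__ == "__main__":
    before = mailbox_cost(storage_gb=2.0, cost_per_gb_month=0.50,
                          protection_overhead=1.0,   # assume backup copy doubles capacity
                          admin_minutes_month=6.0, admin_cost_per_hour=80.0)
    after = mailbox_cost(storage_gb=2.0, cost_per_gb_month=0.50,
                         protection_overhead=0.5,    # assume dedupe-enabled backup
                         admin_minutes_month=2.0,    # assume automation cuts hands-on time
                         admin_cost_per_hour=80.0)
    print(f"Before: ${before:.2f}/mailbox/month, after: ${after:.2f}/mailbox/month")
    print(f"Across 10,000 mailboxes: ${(before - after) * 10000:,.0f}/month saved")
```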

The above is all well and good for longer-term strategy and where you want to get to, however what can be done right now, today?

Here are a few tips to do more with what you have while supporting growth demands

If you have service level agreements (SLAs) and SLOs as part of your service catalog, review with your users what they need vs. what they would like to have. What you may find is that your users expect a given level of service, yet would be happy moving to a cloud service with lower SLO and SLA expectations if it costs less. That scenario is an indicator that you are giving users a higher level of service than they actually require. On the other hand, if you do not have SLOs and SLAs aligned with the cost of the services, then set them up and review customer or client expectations, needs vs. wants, on a regular basis. You might find that you can stretch your budget by delivering a lower (or higher) class of service to meet different users' requirements than what was assumed to be the case. In the case of supporting a better class of service, if an SSD-enabled solution can reduce latency or wait times and boost productivity, more transactions, page views or revenue per hour, that could prompt a client to request that capability to meet their business needs.

Reduce your data footprint impact in order to support growth using the ABCDs of data footprint reduction (DFR), that is, Archive (email, file, database), Backup modernization, Compression and consolidation, Data management and dedupe, and storage tiering, among other techniques.

Pursue storage and server virtualization and optimization, using capacity consolidation where practical and I/O consolidation to fast storage and SSD where possible. Also review storage configuration, including RAID and allocation, to identify whether any relatively easy changes can improve performance, availability, capacity and energy impact.

Investigate available upgrades and enhancements to your existing hardware, software and services that can be applied to provide breathing room within current budgets while evaluating new technologies.

Find and fix problems vs. chasing false positives that provide near-term relief only to have the real issue reappear. Maximize your budgets by identifying where people's time and other resources are being spent due to processes, workflows, technology configuration complexity or bottlenecks, and address those.

Enhance and leverage existing management measurements to gain more insight, along with implementing new metrics for end to end (E2E) situational awareness of your environment, which will enable effective decision making. For example, you may be told to move some function to the cloud because it will be cheaper, yet if you do not have metrics to indicate one way or the other, how can that be an informed decision? If you have metrics that show your cost for the same service being moved to a cloud or managed service provider, as well as QoS, SLO, SLA, RTO, RPO and other TLAs, then you can make informed decisions. That decision may still be to move functions to a cloud or other service even if it is in fact more expensive than what you can provide it for, so that your resources can be directed to supporting other important internal functions.

Look for ways to reduce the cost of a service delivered as opposed to simply cutting costs. They sound like one and the same; however, if you have metrics and measurements providing the situational awareness to know what the cost of a service is, you can then look at how to streamline those services, remove complexity, reduce workflow and leverage automation, thereby removing cost. The goal is the same, however how you go about removing cost can have an impact on your return on innovation, not to mention customer satisfaction.

Also be an informed shopper: have a forecast or plan on what you will need and when, along with what you must have (core requirements) vs. what you would like to have or want. When looking at options, cover what is needed first, and then see if you can get what you want or would like for little or no extra cost if it adds value or enables other initiatives. Part of being an informed shopper is having the support of the business to procure what you want or need, which means aligning technology resources and their cost to the delivery of business functions and services.

What you need vs. what you want
In a recent interview with the Associated Press (AP), the reporter wanted my comments about spending vs. saving during economically tough times (you can read the story here). Basically, my comments were to spend within your means by identifying what you need vs. what you want, what is required to keep the business running or improve productivity and remove cost, as opposed to acquiring nice-to-have things that can wait. Sure, I would like to have a new 85 to 120" 3D monitor for my workstation that could double as a TV, however I do not need or require it.

On the other hand, I recently upgraded an existing workstation, adding a Hybrid Hard Disk Drive (HHDD) and some additional memory, about a $200 USD investment that is already paying for itself via increased productivity. That is, instead of enjoying a cup of Dunkin' Donuts coffee while waiting for some tasks to complete on that system, I'm able to get more done in a given amount of time, boosting productivity.

For IT environments this means looking at expenditures to determine what is needed or required to keep things running while supporting near term strategic and tactical initiatives or pet projects.

For vendors and VARs, if things have not been a challenge yet, they will now need to refine their messages to show more value and return on innovation (the new ROI), in terms of how they help their customers or prospects stretch resources (budgets, people, skill sets, products, services, licenses, power and cooling, floor space) further to support growth while removing costs without compromising on service delivery. This also means a shift in thinking from short-term or tactical cost cutting to longer-term strategic approaches for reducing the cost to deliver a service or resource.

Here are some related links pertaining to stretching your resources, doing more with what you have, increasing productivity and maximizing your budget to support growth without compromising on customer service.

Saving Money with Green IT: Time To Invest In Information Factories
Storage Efficiency and Optimization – The Other Green
Shifting from energy avoidance to energy efficiency
Saving Money with Green Data Storage Technology
Green IT Confusion Continues, Opportunities Missed!
PUE, Are you Managing Power, Energy or Productivity?
Cloud and Virtual Data Storage Networking
Is There a Data and I/O Activity Recession?
More Data Footprint Reduction (DFR) Material

What is your take?

Are you and your company going into spending-freeze mode, or are you still spending, however with constraints placed on discretionary spending?

How are you stretching your IT budget to go further?

 

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Measuring Windows performance impact for VDI planning

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to measuring the impact of Windows boot performance and what that means for planning Virtual Desktop Infrastructure (VDI) initiatives.

With adoption of Virtual Desktop Infrastructure (VDI) initiatives being a popular theme associated with cloud and dynamic infrastructure environments, a related discussion point is the impact on networks, servers and storage during boot or startup activity, and how to avoid bottlenecks. VDI solution vendors include Citrix, Microsoft and VMware, along with various server, storage, networking and management tools vendors.

A common storage and network related topic involving VDI is boot storms, when many workstations or desktops all start up at the same time. However, any discussion around VDI and its impact on networks, servers and storage should also be expanded from read-centric boots to write-intensive shutdown or maintenance activity as well.

Having an understanding of what your performance requirements are is important to adequately design a configuration that will meet your Quality of Service (QoS) and service level objectives (SLOs) for a VDI deployment, in addition to knowing what to look for in candidate server, storage and networking technologies. For example, knowing how your different desktop applications and workloads perform on a normal basis provides a baseline to compare against during busy periods or times of trouble. Another benefit is that when shopping for storage systems, for example, and reviewing various benchmarks, knowing your actual performance and application characteristics helps align the applicable technology to your QoS and SLO needs while avoiding apples-to-oranges benchmark comparisons.
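
By way of illustration, here is a minimal sizing sketch of how a measured per-desktop baseline might be rolled up into an aggregate boot storm estimate. The desktop count, IOPS figures and concurrency assumption below are placeholders to be replaced with numbers measured in your own environment.

```python
# Hypothetical VDI sizing sketch: estimating the aggregate IOPS a boot storm or
# login storm could place on shared storage, from a measured per-desktop baseline.
# The per-desktop numbers are placeholders; substitute figures measured with a
# tool such as hIOmon from your own environment.

def boot_storm_iops(desktops: int, iops_per_desktop_boot: float,
                    concurrency: float) -> float:
    """Peak IOPS if `concurrency` (0..1) of the desktops boot in the same window."""
    return desktops * concurrency * iops_per_desktop_boot

if __name__ == "__main__":
    desktops = 500
    steady_state_iops = 10      # assumed measured per-desktop steady-state IOPS
    boot_iops = 60              # assumed measured per-desktop IOPS during boot
    concurrency = 0.5           # assume half the desktops start within the same window

    peak = boot_storm_iops(desktops, boot_iops, concurrency)
    print(f"Steady state: {desktops * steady_state_iops:,.0f} IOPS")   # 5,000
    print(f"Boot storm peak: {peak:,.0f} IOPS")                        # 15,000
```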

Check out the entire piece including some test results using the hIOmon tool from hyperIO to gather actual workstation performance numbers.

Keep in mind that the best benchmark is your actual applications running as close to possible to their typical workload and usage scenarios.

Also keep in mind that fast workstations need fast networks, fast servers and fast storage.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

NetApp and Akorri: An E2E cross technology domain SRA play

The other day NetApp announced that it was planning another acquisition, following its recent purchase of Bycast (policy-based storage and management software).

This time, NetApp is making yet another software acquisition: Akorri, a startup focused on Infrastructure Resource Management (IRM), End to End (E2E) cross-technology-domain management, and Storage or Systems Resource Analysis (SRA). This builds on NetApp's past acquisition of the SRA solution Onaro.

Is this a good move by NetApp?

Assuming they got a good price, yes, this has real potential for NetApp, assuming they can assimilate the solution as well as articulate where it fits, complementing their other management tools including SANscreen (aka Onaro).

Is Akorri a good product?

Yes. Most of the customers and VAR partners of Akorri that I talk to have great things to say, and having looked into the technology, it has lots of good potential for NetApp. However, a common theme around Akorri has been its high price, something that was also heard from Onaro customers before NetApp did that acquisition. If NetApp can leverage its direct as well as partner touch to reduce the cost of sale for Akorri, and rationalize the pricing or at least better articulate the value proposition to make it a must-have vs. a nice-to-have, they can do well.

The importance of E2E awareness of IT resources across different technology domains (or focus areas) is that you cannot effectively manage what you do not have timely access or visibility into. Hence the theme: you cannot effectively manage what you do not know about in a timely manner. I recently did a couple of Industry Trends and Perspectives webcast events around the topics of End to End (E2E) awareness and cross-domain (or cross-technology) management insight for cloud, virtual and other abstracted as well as physical IT environments.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

E2E Awareness and insight for IT environments

I recently did a couple of Industry Trends and Perspectives webcast events around the topics of End to End (E2E) awareness and cross-domain (or cross-technology) management insight for cloud, virtual and other abstracted as well as physical IT environments.

The importance of E2E awareness of IT resources across different technology domains (or focus areas) is that you cannot effectively manage what you do not have timely access or visibility into. Hence the theme of the session: you cannot effectively manage what you do not know about in a timely manner.

Here is the abstract for the webcast:

Virtualization, clouds and other forms of abstraction help IT organizations enable flexible and scalable services delivery. While abstraction of underlying resources simplifies services delivery from an IT customer's perspective, additional layers of technology along with interdependencies still need to be tracked as well as managed. A key enabler for IT organizations is having end to end (E2E) situational awareness of available resources and how they are being used. By having timely situational awareness across various technology domains, IT organizations gain insight into how resources can be deployed more effectively and efficiently.

Join independent IT industry analyst, author and blogger Greg Schulz as he looks at common challenges as well as opportunities for leveraging E2E situational awareness to remove blind spots from efficient and effective IT services delivery. Greg will look at several scenarios including, among others, cost reduction, maximizing resource usage, and shrinking migration and data consolidation times for cloud, virtual and traditional IT environments while maintaining or enhancing IT services delivery.

If you are interested in IT Infrastructure Resource Management (IRM) of servers, storage, I/O networking, virtualization, cloud, backup or restore, optimization, as well as cloud or legacy environments and metrics, I invite you to view the following webcast.

E2E cross domain awareness webcast

Click on the above image to access the BrightTalk webcast from their recent Virtualization Summit series (may require registration).

If you are interested, here is a link to a previous post I did on E2E management, SRA (systems or storage resource analysis) and management insight, along with a recent related white paper sponsored by SANpulse that you can access here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

End to End (E2E) Systems Resource Analysis (SRA) for Cloud and Virtual Environments

A new StorageIO Industry Trends and Perspective (ITP) white paper titled “End to End (E2E) Systems Resource Analysis (SRA) for Cloud, Virtual and Abstracted Environments” is now available at www.storageioblog.com/reports compliments of SANpulse technologies.

End to End (E2E) Systems Resource Analysis (SRA) for Virtual, Cloud and abstracted environments: Importance of Situational Awareness for Virtual and Abstracted Environments

Abstract:
Many organizations are in the planning phase or already executing initiatives to move their IT applications and data to abstracted, cloud (public or private), virtualized or other forms of efficient, effective, dynamic operating environments. Others are in the process of exploring where, when, why and how to use various forms of abstraction techniques and technologies to address various issues. These include opportunities to leverage virtualization and abstraction techniques that enable IT agility, flexibility, resiliency and scalability in a cost-effective yet productive manner.

An important need when moving to a cloud or virtualized dynamic environment is to have situational awareness of IT resources. This means having insight into how IT resources are being deployed to support business applications and to meet service objectives in a cost effective manner.

Awareness of IT resource usage provides insight necessary for both tactical and strategic planning as well as decision making. Effective management requires insight into not only what resources are at hand but also how they are being used to decide where different applications and data should be placed to effectively meet business requirements.

Learn more about the importance and opportunities associated with gaining situational awareness using E2E SRA for virtual, cloud and abstracted environments in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of SANpulse technologies by clicking here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize

Dell

IBM

Over the past couple of weeks there has been a flurry of IT industry activity around data footprint impact reduction, with Dell buying Ocarina and IBM acquiring Storwize. For those who want the quick (compacted, reduced) synopsis of what Dell buying Ocarina and IBM acquiring Storwize means, read the first post in this two-part series as well as some of my comments here and here.

This piece and its companion in part I of this two-part series are about expanding the discussion to the much larger opportunity for vendors or VARs around overall data footprint impact reduction, beyond where they are currently focused. Likewise, this is about IT customers realizing that there are more opportunities to address data and storage optimization across the entire organization using various techniques, instead of just focusing on backup or VMware virtual servers.

Who are Ocarina and Storwize?
Ocarina is a data and storage management software startup focused on data footprint reduction using a variety of approaches, techniques and algorithms. They differ from the traditional data dedupers (e.g. Asigra, Bakbone, Commvault, EMC Avamar, Datadomain and Networker, Exagrid, Falconstor, HP, IBM Protectier and TSM, Quantum, Sepaton and Symantec among others) by looking at data footprint reduction beyond just backup.

This means looking at how to reduce the data footprint across different types of data, including videos, images and text-based documents, among others. As a result, the market sweet spot for Ocarina is general data footprint reduction, covering static along with active data, including entertainment, video surveillance or gaming, reference data, web 2.0 and other bulk storage application data needs (this should complement Dell's recent Exanet acquisition).

What this means is that Ocarina is very well suited to address the rapidly growing amount of unstructured data that may not otherwise be handled as efficiently with dedupe alone.

Storwize is a data and storage management startup focused on data footprint reduction using inline compression, with an emphasis on maintaining performance for reads as well as writes of unstructured as well as structured database data. Consequently, the market sweet spot for Storwize is boosting the capacity of existing NAS storage systems from different vendors without negatively impacting performance. The trade-off of the Storwize approach is that you do not get the spectacular data reduction ratios associated with backup-centric or focused dedupe; however, you maintain the performance associated with online storage that some dedupers dream of.

Both Dell and IBM have existing dedupe solutions for general purpose as well as backup, along with other data footprint impact reduction tools (either owned or via partners). Now they are both expanding their focus and reach, similar to what others such as EMC, HP, NetApp, Oracle and Symantec are doing. What this means is that someone at Dell and IBM sees that there is much more to data footprint impact reduction than just a focus on dedupe for backup.

Wait, what does all of this discussion (or read here for background issues, challenges and opportunities) about unstructured data and changing access lifecycles have to do with dedupe, Ocarina and Storwize?

Continue reading, as this is about the expanding opportunity for data footprint reduction across entire organizations. That is, more data is being kept online, and the expanding data footprint impact needs to be addressed to meet business objectives using various techniques balancing performance, availability, capacity and energy or economics (PACE).


What does all of this have to do with IBM buying Storwize and Dell acquiring Ocarina?
If you have not pieced this together yet, let me net it out.

This is about the opportunity to address the organization-wide expanding data footprint impact across all applications, types of data and tiers of storage, to support business growth (more data to store) while maintaining QoS yet reducing per-unit costs, including management.

This is about expanding the story to broader data footprint impact reduction from the more narrowly focused backup and dedupe discussion, which is still in its infancy relative to its full market potential (read more here).

Now are you seeing where this is going and fits?

Does this mean IBM and Dell defocus on their existing Dedupe product lines or partners?
I do not believe so, at least as long as their respective revenue prevention departments are kept on the sidelines and off the field of play. What I mean by this is that the challenge for IBM and Dell is similar to that of others, such as EMC, that have diverse portfolios or technology toolboxes. The challenge is messaging to the bigger issues, then aligning the right tool to the task at hand to address given issues and opportunities, instead of focusing singularly on a specific product and causing revenue prevention elsewhere.

As an example, for backup, I would expect Dell to continue to work with its existing dedupe backup-centric partners and technologies while finding new opportunities to leverage the Ocarina solution. Likewise, I would expect IBM to continue to show customers where Tivoli software-based dedupe, ProtecTIER (aka the deduper formerly known as Diligent) or other target-based dedupe fits, and to expand into other data footprint impact areas with Storwize.

Does this change the playing field?
IMHO these moves, as well as some previous moves by the likes of EMC and NetApp among others, are examples of expanding the scope and dimension of the playing field. That is, the focus is much more than just dedupe for backup or for virtual machines (e.g. VMware vSphere or Microsoft Hyper-V).

This signals a growing awareness of the much larger and broader opportunity around organization-wide data footprint impact reduction. In the broader context, some applications or data get compressed in application software such as databases, in file systems, operating systems or even hypervisors, as well as in networks using protocol or bandwidth optimizers, via inline compression or post-processing techniques, as has been the case with streaming tape devices for some time.

This also means that, whereas with dedupe the primary focus or marketing angle until recently has been reduction ratios, data transfer rates also become important to meet the needs of time- or performance-sensitive applications.

Hence the role of policy-based data footprint reduction, where the right tool or technique is applied to meet specific service requirements. For those vendors with a diverse data footprint impact reduction toolkit including archive, compression, dedupe and thin provisioning among other techniques, I would expect to hear expanded messaging around the theme of applying the right tool to the task at hand.
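
To illustrate why ratios alone can be misleading, here is a small, hypothetical sketch converting an N:1 reduction ratio into percent of capacity saved. It shows the diminishing returns of ever-higher ratios, which is one reason transfer rates matter as much as ratios for time- or performance-sensitive applications.

```python
# Hypothetical illustration of why reduction ratios can be misleading on their own:
# converting an N:1 ratio into percent capacity saved shows diminishing returns.

def ratio_to_savings_pct(ratio: float) -> float:
    """Convert an N:1 data reduction ratio into percent of capacity saved."""
    return 100.0 * (1.0 - 1.0 / ratio)

if __name__ == "__main__":
    for ratio in (2, 5, 10, 20, 50):
        print(f"{ratio:>3}:1 reduction  ->  {ratio_to_savings_pct(ratio):.1f}% capacity saved")
    # 2:1 already saves 50%; going from 10:1 to 20:1 only adds another 5 points,
    # which is why performance and transfer rate can matter as much as ratio.
```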

Does this mean Dell bought Ocarina to accessorize EqualLogic?
Perhaps, however that would then beg the question of why EqualLogic needs accessorizing. Granted, there are many EqualLogic along with other Dell-sold storage systems attached to Dell and other vendors' servers operating as NFS or Windows CIFS file servers that are candidates for Ocarina. However, there are also many environments that do not yet include Dell EqualLogic solutions, where Ocarina is a means for Dell to extend its reach, enabling those organizations to do more with what they have while supporting growth.

In other words, Ocarina can be used to accessorize, or it can be used to generate and create pull-through for various Dell products. I also see a very strong affinity and opportunity for Dell to combine its recent Exanet NAS storage clustering software with Dell servers and storage to create bulk or scale-out solutions similar to what HP and other vendors have done. Of course, what Dell does with the Ocarina software over time, where they integrate it into their own products as well as OEM it to others, should be interesting to watch or speculate upon.

Does this mean IBM bought Storwize to accessorize XIV?
Well, I guess if you put a gateway (or software on a server, which is the same thing) in front of XIV to transform it into a NAS system, then sure, Storwize could be used to increase the net usable capacity of the XIV installed base. However, that is a lot of work and cost for what is, on a relative basis, a small footprint; yet it is a viable option nevertheless.

IMHO IBM has much more of a play, perhaps a home run, by walking before they run: placing Storwize in front of their existing large installed base of NetApp N series (not to mention targeting NetApp's own install base) as well as complementing their SONAS solutions. From there, as IBM gets their legs and mojo, they could go on the attack against other vendors' NAS solutions with an efficiency story, similar to how IBM server groups target other vendors' server business for takeout opportunities, except in a complementary manner.

Longer term, I would not be surprised to see IBM continue development of the block-based IP (as well as file) in the Storwize product for deployment in solutions ranging from SVC to their own or OEM-based products, along with articulating their comprehensive data footprint reduction solution portfolio. What will be important for IBM is to articulate which solution to use when, where, why and how, without confusing their customers, partners and the rest of the industry (something that Dell will also have to do).

Some links for additional reading on the above and related topics

Wrap up (for now)

Organizations of all shapes and sizes are encountering some form of growing data footprint impact that currently, or soon will, need to be addressed. Given that different applications and types of data, along with their associated storage mediums or tiers, have various performance, availability, capacity, energy and economic characteristics, multiple data footprint impact reduction tools or techniques are needed. What this all means is that the focus of data footprint reduction is expanding beyond just dedupe for backup or other early deployment scenarios.

Note that this means dedupe has an even brighter future than its current focus, which is still only scratching the surface of potential market adoption, as was discussed in part 1 of this series.

However, this also means that dedupe is not the only solution for all data footprint reduction scenarios. Other techniques including archiving, compression, data management, thin provisioning, data deletion, tiered storage and consolidation will start to gain respect, coverage, discussion and debate.

Bottom line, use the most applicable technologies or combinations along with best practice for the task and activity at hand.

For some applications, reduction ratios are the important focus, so the tools or modes of operation that achieve those results get the attention.

Likewise, for other applications where the focus is on performance with some data reduction benefit, tools are optimized for performance first and reduction second.

Thus I expect messaging from some vendors to adjust (expand) to the capabilities that they have in their toolbox (product portfolio) offerings.

Consequently, IMHO some of the backup-centric dedupe solutions may find themselves in niche roles in the future unless they can diversify. Vendors with multiple data footprint reduction tools will also do better than those with only a single-function or narrowly focused tool.

However, for those who only have a single tool, or perhaps a couple, well, guess what the approach and messaging will be. After all, if all you have is a hammer, everything looks like a nail; if all you have is a screwdriver, well, you get the picture.

On the other hand, if you are still not clear on what all this means, send me a note, give me a call, post a comment or a tweet, and I will be happy to discuss it with you.

Oh, FWIW, if interested, disclosure: Storwize was a client a couple of years ago.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

July 2010 Odds and Ends: Perspectives, Tips and Articles

Here are some items that have been added to the main StorageIO website news, tips, articles and video/podcast pages, pertaining to a variety of topics including data storage, I/O, networking, data centers, virtualization, Green IT, performance, metrics and more.

These content items include various odds-and-ends pieces such as industry or technology commentary, articles, tips, Ask the Expert (ATE) items (see additional Ask the Expert tips here) or FAQs, as well as some videos and podcasts for your mid-summer (if in the northern hemisphere) enjoyment.

The New Green IT: Productivity, supporting growth, doing more with what you have

Energy-efficient and money-saving Green IT or storage optimization is often taken to mean things like MAID, Intelligent Power Management (IPM) for servers, storage disk drive spin-down or data deduplication; in other words, technologies and techniques to minimize or avoid power consumption as well as the subsequent cooling requirements, which for some data, applications or environments can be the case. However, there is also a shift from energy avoidance to being efficient, effective, productive, not to mention profitable, as forms of optimization. Collectively, these various techniques and technologies help address or close the Green Gap and can reduce Green IT confusion by boosting productivity (the same goes for servers or networks) in terms of more work, IOPS, bandwidth, data moved, frames or packets, transactions, videos or email processed per watt per second (or other unit of time).
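
As a simple illustration of productivity-per-watt metrics such as IOPS per watt, here is a minimal sketch; the IOPS and wattage figures are made up purely for the example.

```python
# Hypothetical sketch of the productivity-per-watt metrics mentioned above,
# e.g. IOPS per watt or bandwidth per watt. The figures are illustrative only.

def work_per_watt(work_per_second: float, watts: float) -> float:
    """Generic activity-per-watt metric (IOPS/W, MB/s per W, transactions/W, ...)."""
    return work_per_second / watts

if __name__ == "__main__":
    # Assumed example: a storage system delivering 40,000 IOPS while drawing 800 W
    print(f"{work_per_watt(40_000, 800):.1f} IOPS per watt")      # 50.0
    # Compare with a configuration doing 60,000 IOPS at 1,000 W
    print(f"{work_per_watt(60_000, 1_000):.1f} IOPS per watt")    # 60.0 -> more productive
```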

Click here to read and listen to my comments about boosting IOPS per watt, or here to learn more about the many facets of energy-efficient storage, and here for different aspects of storage optimization. Want to read more about the next major wave of server, storage, desktop and networking virtualization? Then click here to read more about virtualization life beyond consolidation, where the emphasis or focus expands to abstraction, transparency and enablement in addition to consolidation for servers, storage and networks. If you are interested in metrics and measurements, Storage Resource Management (SRM), not to mention discussion about various macro data center metrics including PUE among others, click on the preceding links.

NAS and Shared Storage, iSCSI, DAS, SAS and more

Shifting gears to general industry trends and commentary, here are some comments on consumer and SOHO storage sharing, the role and importance of Value Added Resellers (VARs) for SMB environments, as well as the top storage technologies that are in use and remain relevant. Here are some comments on iSCSI, which continues to gain in popularity, as well as storage options for small businesses.

Are you looking to buy or upgrade to a new server? Here are some vendor- and technology-neutral tips to help determine needs along with requirements, to help you be a more effective, informed buyer. Interested in or want to know more about Serial Attached SCSI (6Gb/s SAS), including for use as external shared direct attached storage (DAS) for Exchange, SharePoint, Oracle, VMware or Hyper-V clusters among other usage scenarios? Check out this FAQ as well as the podcast. Here are some other items, including a podcast about using storage partitions in your data storage infrastructure, an ATE about what type of 1.5TB centralized storage to support multiple locations, and a video on scaling with clustered storage.

That is all for now, hope all is well and enjoy the content.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

EMC VPLEX: Virtual Storage Redefined or Respun?

In a flurry of announcements coinciding with EMCworld, taking place in Boston this week of May 10, 2010, EMC officially unveiled the Virtual Storage vision initiative (aka twitter hash tag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here along with one from Stu Miniman (twitter @stu) of EMC here or here), and according to EMC the VPLEX product was made generally available (GA) back in April.

The Virtual Storage vision and associated announcements consisted of:

  • Virtual Storage vision – Big picture initiative view of what and how to enable private clouds
  • VPLEX architecture – Big picture view of federated data storage management and access
  • First VPLEX based product – Local and campus (Metro to about 100km) solutions
  • Glimpses of how the architecture will evolve with future products and enhancements


Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

The Big Picture
The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud, which should enable characteristics including transparency, agility, flexibility, efficiency, always-on availability, resiliency, security, on-demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft Hyper-V and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage, similar to how virtual servers release applications and their operating systems from being tied to a physical server.

While the current focus of desktop, server and storage virtualization has been consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency and portability (Figure 2). In this next phase, which emphasizes enablement and doing more with what you have while enhancing business agility, the focus extends from how much can be consolidated, or the number of virtual machines per physical machine, to using virtualization for flexibility and transparency (read more here and here or watch here).


Figure 2: Virtual Storage Big Picture

That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) means local and distributed (metro and wide area) federation management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX, data and storage management federation across EMC and third party products, FAST (fully automated storage tiering), SSD, data footprint reduction, and data protection and data protection management products among others.

Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, Pots and Pans) along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, dB loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, FCoE or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, HyperV, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates, the big picture here is about enabling flexibility, agility, ease of deployment and management, along with boosting resource usage effectiveness and presumably productivity on a local, metro and, in the future, global basis.


Figure 3: EMC Storage Federation and Enabling Technology Big Picture

The VPLEX Big Picture
Some of the tenets of the VPLEX architecture (Figure 3) include a scale-out cluster or grid design for local and distributed (metro and wide area) access, where you can start small and evolve as needed in a predictable and deterministic manner.


Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

The VPLEX architecture is targeted at enabling next generation data centers, including private clouds, where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between physical or virtual servers and, in theory, other storage systems that rely on underlying block storage. For example, in theory a NAS (NFS, CIFS and AFS) gateway, a CAS content archiving or object based storage system, or a purpose specific database machine could sit between the actual application servers and VPLEX, enabling multiple layers of flexibility and agility for larger environments.

At the heart of the architecture is an engine running a highly distributed data caching algorithm that uses an approach where a minimal amount of data is sent to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes along with load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


Figure 5: EMC VPLEX Big Picture


Figure 6: EMC VPLEX Local with 1 to 4 Engines

Each VPLEX local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, local Intel processors for compute, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports configured for high availability (HA). Communications between the directors and engines is Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.


Figure 7: EMC VPLEX Engine with redundant directors

Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC, along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McDATA based products.

VPLEX also supports cache write-through (Figure 8) to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS9000 and USPV/VM along with IBM DS8000, with others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.


Figure 8: VPLEX Architecture and Distributed Cache Overview

Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which retain their native feature functionality via VPLEX write-through caching.

Reads from various servers can be resolved by any node or engine that checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale reads or write operations from occurring. The actual metadata communicated between nodes is very small, enabling statefulness while reducing overhead and maximizing performance. When a change to cached data occurs, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
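
To illustrate the general concept, here is a hypothetical, greatly simplified Python sketch of directory-style caching; it is not EMC's actual VPLEX algorithm or code. Each node keeps a small table of pointers indicating which peer holds a fresh copy of a block, so on a write only pointer updates and invalidations travel between nodes while the data itself is written through to the underlying array.

    # Minimal sketch of a directory-style distributed read cache.
    # Hypothetical and simplified; not EMC's actual VPLEX implementation.

    class CacheDirectoryNode:
        def __init__(self, name, backend):
            self.name = name
            self.backend = backend        # shared dict standing in for the array: block -> data
            self.local_cache = {}         # block -> data cached locally on this node
            self.directory = {}           # block -> name of node believed to hold a fresh copy

        def read(self, block, peers):
            if block in self.local_cache:                       # 1. local cache hit
                return self.local_cache[block]
            owner = self.directory.get(block)                   # 2. pointer metadata lookup
            if owner is not None and block in peers[owner].local_cache:
                data = peers[owner].local_cache[block]
            else:
                data = self.backend[block]                      # 3. pass-through to the array
            self.local_cache[block] = data
            return data

        def write(self, block, data, peers):
            self.backend[block] = data                          # write through to the array
            self.local_cache[block] = data
            for peer in peers.values():                         # ship only small pointer updates
                if peer is not self:
                    peer.local_cache.pop(block, None)           # invalidate any stale copy
                peer.directory[block] = self.name               # point everyone at the fresh copy

    array = {"blk0": "old"}
    nodes = {n: CacheDirectoryNode(n, array) for n in ("site_a", "site_b")}
    nodes["site_a"].write("blk0", "new", nodes)
    print(nodes["site_b"].read("blk0", nodes))                  # "new", resolved via pointer metadata

All nodes share the same backend dict, standing in for the common array; the only point of the sketch is that coherency is maintained by shipping pointers and invalidations, not the data itself.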


Figure 9: EMC VPLEX Metro Today

For metro deployments, two clusters (Figure 9) are utilized, with distances supported up to about 100km or about 5ms of latency in a synchronous manner, utilizing long distance Fibre Channel optics and transceivers including Dense Wave Division Multiplexing (DWDM) technologies (see Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networks (Elsevier) for additional details on LAN, MAN and WAN topics).
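
For a rough sense of why distance matters, light travels through optical fiber at roughly 200,000 km/s, or about 5 microseconds per kilometer one way. The back-of-the-envelope calculation below uses my own rule-of-thumb numbers, not EMC specifications, and shows propagation delay only; real synchronous latency budgets such as the ~5ms figure above also include switch, transceiver, protocol and array service time.

    # Back-of-the-envelope fiber propagation delay for synchronous distances.
    # Assumes roughly 200,000 km/s signal speed in fiber (about 5 us per km one way).

    def round_trip_ms(distance_km, km_per_sec=200_000.0):
        return 2 * (distance_km / km_per_sec) * 1000.0

    for km in (10, 50, 100):
        print(f"{km:>3} km ~ {round_trip_ms(km):.1f} ms round trip, propagation only")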

Initially EMC is supporting local, campus and metro based VPLEX deployments requiring synchronous communications; asynchronous (WAN) Geo and Global based solutions are planned for the future (Figure 10).


Figure 10: EMC VPLEX Future Wide Area and Global

Online Workload Migration across Systems and Sites
Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

For synchronization and data mobility operations such as a VMware VMotion or Microsoft HyperV Live Migration over distance, information is written to separate LUNs in different locations across what are known as stretch volumes, enabling non-disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives.
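
Here is that arithmetic spelled out as a tiny Python sketch (a hypothetical helper, not a VPLEX sizing tool), using the 8,000 virtualized LUNs per system figure mentioned earlier simply as a sample number:

    # Hypothetical stretch volume accounting: each stretch volume mirrors to
    # one LUN at each of the two locations, so it consumes two LUNs total.

    def luns_remaining(total_luns, stretch_volumes):
        used = 2 * stretch_volumes
        if used > total_luns:
            raise ValueError("not enough LUNs for that many stretch volumes")
        return total_luns - used

    print(luns_remaining(8_000, 1_000))   # 8,000 - 2,000 = 6,000 LUNs still available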

Unlike other approaches that, for coherency and performance, rely on either no cached data or extensive amounts of cached data along with the subsequent overhead of maintaining statefulness (consistency and coherency), including avoiding stale reads or writes, VPLEX relies on a combination of distributed cache lookup tables along with pass-through access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached or shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help to keep costs affordable.

Approach is not unique, it is the implementation
Some storage virtualization solutions, whether software based running on an appliance or network switch, or hardware system based, have focused on emulating or providing capabilities that compete with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide the advanced data and storage management capabilities found in traditional higher end storage products.

VPLEX, while like any tool or technology it could be and probably will be made to do things other than what it is intended for, is really focused on flexibility, transparency and agility as opposed to being used as a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or others) storage systems.

This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused on replacing or providing functionality not found in underlying storage system products.

In a nutshell summary, VPLEX and the Virtual Storage story (vision) are about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

Let's Put it Together: When and Where to use VPLEX
While many storage virtualization solutions are focused around consolidation or pooling, similar to first wave server and desktop virtualization, the next general broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to that of enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

Future applications and usage scenarios should include, in addition to VMware VMotion, Microsoft HyperV and Microsoft Clustering, other host server clustering solutions.


Figure 11: EMC VPLEX Usage Scenarios

Thoughts and Industry Trends Perspectives:

The following are various thoughts, comments, perspectives and questions pertaining to this and storage, virtualization and IT in general.

Is this truly unique as is being claimed?

Interestingly, the message I'm hearing out of EMC is not the claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, but rather that it is their implementation and ability to deploy it on a broad basis that is unique. Now granted, you will probably hear, as is often the case with any vendor or fan boy/fan girl, spins of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogsphere, YouTube galleries, twitter land and beyond.

What is the DejaVu factor here?

For some it will be nonexistent, yet for others there is certainly a DejaVu factor depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many visions and initiatives from the late 90s and early 2000s, when storage virtualization or virtual storage in an open context jumped into the limelight coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market, while others, including whole companies, have fallen by the wayside for a variety of reasons.

Consequently if you were part of or read or listened to any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD not to mention IBM SVC, StorAge (now LSI), Incipient (now part of Texas Memory) or Troika among others you should have some DejaVu.

I guess that also begs the question of what VPLEX is: in band, out of band, or a hybrid fast path/control path approach? From what I have seen it appears to be a fast path approach combined with distributed caching, as opposed to a cache centric in-band approach such as IBM SVC (either on a server or, as was tried, on the Cisco special services blade) among others.

Likewise if you are familiar with IBM Mainframe GDPS or even EMC GDDR as well as OpenVMS Local and Metro clusters with distributed lock management you should also have DejaVu. Similarly if you had looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar as EMC acquired the assets of that now defunct company.

Is this a way for EMC to sell more hardware along with software products?

By removing barriers and enabling IT staffs to support more data on more storage in a denser and more agile footprint, the answer should be yes, something that we may see other vendors emulate, or make noise about what they can do or have been doing already.

How is this virtual storage spin different from the storage virtualization story?

That all depends on your view or definition, as well as belief systems and preferences, for what is or is not virtual storage vs. storage virtualization. For those who believe that storage virtualization is virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, then you probably won't see this as virtual storage, let alone storage virtualization. However, for others it will be confusing, hence EMC introducing terms such as federation and avoiding terms including grid to minimize confusion, yet playing off of cloud crowd commotion.

Is VPLEX a replacement for storage system based tiering and replication?

I do not believe so. Even though some vendors are making claims that tiered storage is dead, just as some vendors declared a couple of years ago that disk drives would be dead by now at the hands of SSD, neither has come to pass, so to speak (pun intended). What this means for VPLEX is that it leverages the underlying automated or manual tiering found in storage systems, such as EMC FAST enabled functionality or similar policy driven and manual functions in third party products.

What VPLEX brings to the table is the ability to transparently present a LUN or volume locally or over distance with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves, the applications, file systems or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to dig into to see how it all actually works, as is the case with any new technology.
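
Conceptually, the trick is a layer of indirection: hosts address a stable virtual volume identifier while a mapping behind it is repointed once the new copy is in sync. The Python sketch below is a generic, hypothetical illustration of that idea, not VPLEX internals.

    # Generic illustration of an indirection (virtualization) layer: the host keeps
    # addressing the same virtual volume while its backing LUN is repointed.

    class VirtualVolumeMap:
        def __init__(self):
            self.mapping = {}                     # virtual volume id -> (array, lun)

        def provision(self, vvol, array, lun):
            self.mapping[vvol] = (array, lun)

        def resolve(self, vvol):
            return self.mapping[vvol]             # where host I/O is directed

        def migrate(self, vvol, new_array, new_lun):
            # Data is synchronized to the new LUN first (not shown); only then is
            # the pointer switched, so hosts never see the volume disappear.
            self.mapping[vvol] = (new_array, new_lun)

    vmap = VirtualVolumeMap()
    vmap.provision("vvol01", "array_a", 17)
    print(vmap.resolve("vvol01"))                 # ('array_a', 17)
    vmap.migrate("vvol01", "array_b", 42)
    print(vmap.resolve("vvol01"))                 # ('array_b', 42), same host-visible volume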

Who is this for?

I see this as being for environments that need flexibility and agility across multiple storage systems, either from one or multiple vendors, on a local, metro or wide area basis. This is for those environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as VMotion or Live Migration among others.

Do VPLEX and Virtual Storage eliminate need for Storage System functionality?

I see some storage virtualization solutions or appliances that focus on replacing underlying storage system functionality instead of coexisting with or complementing it. A way to test for this approach is to listen to or read whether the vendor or provider says anything along the lines of eliminating vendor lock-in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage layer) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

Was Invista a failure that never went into production, making this a second attempt at virtualization?

There is a popular myth in the industry that Invista never saw the light of day outside of trade show expo or other demos; however, the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus, which was enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently, Invista has often been compared apples to oranges with other virtualization appliances that focus on pooling along with other functions, or in some cases serve as an appliance based storage system.

The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is enabling agility for maintenance, facilitating upgrades, moves or reconfiguration and other common IRM tasks, versus using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with the storage virtualizer.

Is this a replacement for EMC Invista?

According to EMC the answer is no and that customers using Invista (Yes, there are customers that I have actually talked to) will continue to be supported. However I suspect that over time Invista will either become a low end entry for VPLEX, or, an entry level VPLEX solution will appear sometime in the future.

How does this stack up or compare with what others are doing?

If you are looking to compare this to cache centric platforms such as IBM's SVC, which adds extensive functionality and capabilities within its storage virtualization framework, it is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning as a complement to the underlying storage system model, whereas SVC caches on a specific cluster basis while enhancing the functionality of underlying storage systems. Rest assured there will be other apples to oranges comparisons made between these platforms.

How will this be priced?

When I asked EMC about pricing, they would not commit to a specific price prior to the announcement, other than indicating that there will be options for on demand or consumption based (e.g. cloud) pricing, pricing per engine capacity, and subscription models (pay as you go).

What is the overhead of VPLEX?

While EMC runs various workload simulations (including benchmarks) internally as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC opposition to simulations such as SPC has varied; however, this could be a good and interesting opportunity for them to silence the industry (including myself) who continue to ask them (along with a couple of other vendors, including IBM with their XIV) when they will release public results.

The interesting opportunity I see for EMC is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, they could simply show the performance of some third party product that is already tested on the SPC website, and then submit results for that same product running attached to a VPLEX.

If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

  • Demonstrating the low latency and minimal to no overhead of VPLEX
  • Show VPLEX with a third party product comparing latency before and after
  • Provide a comparison to other virtualization platforms including IBM SVC

As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at the other public workload tests such as ESRP.

Additional related reading material and links:

Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
Chapter 3: Networking Your Storage
Chapter 4: Storage and IO Networking
Chapter 6: Metropolitan and Wide Area Storage Networking
Chapter 11: Storage Management
Chapter 16: Metropolitan and Wide Area Examples

The Green and Virtual Data Center (CRC)
Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
Chapter 4: IT Infrastructure Resource Management (IRM)
Chapter 5: Measurement, Metrics, and Management of IT Resources
Chapter 7: Server: Physical, Virtual, and Software
Chapter 9: Networking with your Servers and Storage

Also see these:

Virtual Storage and Social Media: What did EMC not Announce?
Server and Storage Virtualization – Life beyond Consolidation
Should Everything Be Virtualized?
Was today the proverbial day that he!! Froze over?
Moving Beyond the Benchmark Brouhaha

Closing comments (For now):
As with any new vision, initiative, architecture and initial product, there will be plenty of questions to ask, items to investigate, and early adopter customers or users to talk with to determine what is real, what is future, what is usable and practical, along with what is merely nice to have. Likewise there will be plenty of mud ball throwing and slinging between competitors, fans and foes; for those who enjoy watching or reading those exchanges, you should be well entertained.

In general, the EMC vision and story builds on, and presumably delivers on, past industry hype, buzz and vision with solutions that can be put into environments as a productivity tool that works for the customer, instead of the customer working for the tool.

Remember the golden rule of virtualization which is in play here is that whoever controls the virtualization or associated management controls the gold. Likewise keep in mind that aggregation can cause aggravation. So do not be scared, however look before you leap meaning do your homework and due diligence with appropriate levels of expectations, aligning applicable technology to the task at hand.

Also, if you have seen or experienced something in the past, you are more likely to have DejaVu as opposed to seeing things as revolutionary. However it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda, lets see if EMC can leverage their past experiences to make this a LottaLotta.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Happy Earth Day 2010!

Here in the northern hemisphere it is late April and thus mid spring time.

That means the trees are sprouting their buds and leaves and flowering, while other plants and things come to life.

In Minnesota where I live, there is not a cloud in the sky today, the sun is out and it's going to be another warm day in the 60s, a nice day to not be flying or traveling and thus enjoy the fine weather.

Among other things of note on this earth day 2010 include:

  • Minnesota Twins' new home, Target Field, was just named the most green Major League Baseball (MLB) stadium as well as the greenest in the US with its LEED (or see here) certification.
  • Iceland's Eyjafjallajokull volcano continues to spew water vapor (steam), CO2 and ash at a slower rate than last week when it first erupted, with some speculating that there could be impending activity from other Icelandic volcanoes. Some estimates placed the initial eruption's CO2 impact and the subsequent flight cancellations as roughly neutral, essentially canceling each other out; however, I'm sure we will be hearing many different stories in the weeks to come.

  • Image of Iceland Eyjafjallajokull Volcano Eruption via Boston.com

  • Flights to/from and within Europe and the UK are returning to normal
  • Toyota continues to deal with recalls on some of their US built automobiles including the energy efficient Prius, some of which may have been purchased during the recent US cash for clunkers (CFC) program (hmm, is that ironic or what?)
  • Greenpeace, in addition to using a Facebook page to protest Facebook data center practices, is now targeting cloud IT in general, including just before the Apple iPad launch (here are some comments from Microsoft).
  • Vendors in all industries are lining up for the second coming of Green marketing or perhaps Green Washing 2.0

The new Green IT, moving beyond Green wash and hype

Speaking of Green IT including Green Computing, Green Storage, Virtualization, Cloud, Federation and more, here is a link to a post that I did back in February discussing how the Green Gap continues to exist.

The green gap exists and centers around confusion over what Green means, along with the common disconnects between the core IT issues or barriers to becoming more efficient, effective, flexible and optimized, from both an economic and an environmental basis, and the themes commonly messaged under the green umbrella (read more here).

Regardless of where you stand on Green, Green washing, Green hype, environmentalism, eco-tech and other related themes, for at least a moment, set aside the politics and science debates and think in terms of practicality and economics.

That is, look for simple, recurring things that can be done to stretch your dollar or spending ability in order to support demand (see figure below) more effectively, along with reducing waste. For example, to meet growing demand requirements in the face of shrinking or stagnant budgets, the action is to stretch available resources to do more work when needed, or to retain more data where applicable, within the same or a smaller footprint. What this means is that while common messaging is around reducing costs, look at the inverse, which is to do more with the available budget or resources. The result is green in terms of both economic and environmental benefits.

Figure: Increasing IT Resource Demand

Figure: Green IT enablement techniques and technologies

Look at and understand the broader aspects of being green which has both economical and environmental benefits without compromising on productivity or functionality. There are many aspects or facets of being green beyond those commonly discussed or perceived to be so (See Green IT enablement techniques and technologies figure above).

Certainly recycling of paper, water, aluminum, plastics and other items, including technology equipment, is important to reduce waste and is something to consider. Another aspect of reducing waste, particularly in IT, is avoiding rework, which can range from finding network bottlenecks or problems that result in continuous retransmission of data for failed backup, replication or data transfers, causing lost opportunity or resource consumption. Likewise, programming errors (bugs) or misconfiguration that result in rework or lost productivity are also forms of waste, among others.

Another theme is the shift from energy avoidance to energy efficiency and effectiveness, which are often thought to be the same. However, the expanded focus is also about getting more work done when needed with the same or fewer resources (see figure below), for example increasing activity (IOPS, transactions, emails or videos served, bandwidth or messages) per watt of energy consumed.

Figure: Shifting from energy avoidance to effectiveness

One of the many techniques and approaches for addressing energy, including stretching resources and being green, is intelligent power management (IPM). With IPM, the focus is not strictly centered on energy avoidance, but rather on intelligently adapting to different workloads or activity, balancing performance and energy. Thus, when there is work to be done, get the work done quickly with as little energy as possible (IOPS or activity per watt); when there is less work, provide lower performance and thus smaller energy requirements; and when there is no work to be done, go into additional energy saving modes. Power management does not have to be exclusively about turning off the lights or IT equipment in order to be green.
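
As a hypothetical, simplified illustration of that behavior (the power states, IOPS ceilings and wattages below are made-up numbers, not measurements of any product), the sketch picks the lowest power state that can still service the offered workload and reports the resulting activity per watt:

    # Hypothetical intelligent power management (IPM) sketch: pick the lowest
    # power state that still meets the offered IOPS, then report IOPS per watt.

    POWER_STATES = [                  # (name, max IOPS, watts) - illustrative values only
        ("sleep",   0,       2),
        ("low",     2_000,  60),
        ("medium",  6_000,  90),
        ("full",   12_000, 130),
    ]

    def choose_state(offered_iops):
        for name, max_iops, watts in POWER_STATES:
            if offered_iops <= max_iops:
                return name, watts
        return POWER_STATES[-1][0], POWER_STATES[-1][2]   # saturated at full power

    for load in (0, 1_500, 5_000, 11_000):
        state, watts = choose_state(load)
        ipw = load / watts if load else 0.0
        print(f"{load:>6} IOPS -> {state:<6} state, {watts:>3} W, {ipw:6.1f} IOPS/W")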

The following two figures look at Green IT past, present and future, with an expanding focus around optimization and effectiveness, meaning getting more work done, storing more data for longer periods of time, and meeting growth demands with what appear to be additional resources, yet at a lower per unit cost, without compromising on performance, availability or economics.

Figure: Green IT: Past, present and future shift from avoidance to efficiency and effectiveness

Figure: The new Green IT: Boosting business effectiveness, maximizing ROI while helping the environment

If you think about going green as simply doing or using things more effectively, reducing waste, working more intelligently or effectively the benefits are both economical and environmentally positive (See the two figures above).

Instead of finding ways to fund green initiatives, shift the focus to how you can enable enhanced productivity, stretching resources further, and doing more in the same or a smaller footprint (floor space, power, cooling, energy, personnel, licensing, budgets) for business economic and environmental sustainability, with the result being environmental benefits.

Also keep in mind that small percentage changes on a large or recurring basis have significant benefits. For example a small change in cooling temperatures while staying within vendor guideline recommendations can result in big savings for large environments.
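
For example (purely illustrative numbers of my own, not measured results), a few percent shaved off a large recurring energy bill adds up quickly:

    # Purely illustrative: a small recurring percentage saving on a large energy bill.

    annual_energy_cost = 2_000_000      # hypothetical yearly facility energy spend in dollars
    savings_pct = 0.03                  # e.g. a modest cooling setpoint or airflow change

    annual_savings = annual_energy_cost * savings_pct
    print(f"${annual_savings:,.0f} saved per year, ${annual_savings * 5:,.0f} over five years")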

 

Bottom line

If you are a business and you discount green as simply a fad, or perhaps as a public relations (PR) initiative or activity tied to reducing carbon footprints and recycling, then you are missing out on economic (top and bottom line) enhancement opportunities.

Likewise, if you think that going green is only about the environment, then there is a missed opportunity to boost the economic opportunities that help fund those initiatives.

Going green means many different things to various people and is often more broad and common sense based than most realize.

That is all for now, happy earth day 2010

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Spring 2010 StorageIO Newsletter

Welcome to the spring 2010 edition of the Server and StorageIO (StorageIO) newsletter.

This edition follows the inaugural issue (Winter 2010) incorporating feedback and suggestions as well as building on the fantastic responses received from recipients.

A couple of enhancements included in this issue (marked as New!) are a Featured Related Site along with Some Interesting Industry Links. Another enhancement based on feedback is additional commentary, which in upcoming issues will expand to include a column article along with industry trends and perspectives.

Figure: Spring 2010 StorageIO Newsletter

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the spring 2010 newsletter as HTML or PDF, or go to the newsletter page.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com.

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Also, a very big thank you to everyone who has helped make StorageIO a success!

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

It's US Census time, what about IT Data Centers?

It is time this year for that once-a-decade activity referred to as the US 2010 Census.

With the 2010 census underway, not to mention it also being time for completing and submitting your income tax returns, if you are in IT, what about measuring, assessing, taking inventory of, or analyzing your data and data center resources?

Figure 1: IT US 2010 Census forms

Have you recently taken a census of your data, data storage, servers, networks, hardware, software tools, services providers, media, maintenance agreements and licenses not to mention facilities?

Likewise, have you figured out what taxes, if any, in terms of overhead or burden exist in your IT environment, or where opportunities to become more optimized and efficient might yield an IT resource refund of sorts?

If not, now is a good time to take a census of your IT data center and associated resources, in what might also be called an assessment, review, inventory or survey of what you have, how it is being used, where, by whom and when, along with the associated configuration, performance, availability, security and compliance coverage, as well as costs and energy impact, among other items.

Figure 2: IT Data Center Metrics for Planning and Forecasts

How much storage capacity do you have, how is it allocated along with being used?

What about storage performance, are you meeting response time and QoS objectives?

Let's not forget about availability, that is, planned and unplanned downtime: how have your systems been behaving?

From an energy or power and cooling standpoint, what is the consumption, along with metrics aligned to productivity and effectiveness? These include IOPS per watt, transactions per watt, videos or emails along with web clicks or page views served per watt, processor GHz per watt, as well as data movement bandwidth per watt and capacity stored per watt in a given footprint.

Other items to look into for data centers besides storage include servers, data and I/O networks, hardware, software, tools, services and other supplies, along with the physical facility and metrics such as PUE. Speaking of optimization, how is your environment doing? That is another advantage of doing a data center census.
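
For reference, PUE is simply total facility power divided by the power delivered to the IT equipment, and the productivity indicators above divide useful work or capacity by the energy consumed. The short sketch below shows both, using hypothetical sample numbers rather than measurements from any real facility:

    # Simple data center census metrics, with hypothetical sample numbers.

    def pue(total_facility_kw, it_equipment_kw):
        # Power Usage Effectiveness: total facility power / IT equipment power.
        return total_facility_kw / it_equipment_kw

    def per_watt(work_units, watts):
        # Generic productivity indicator: IOPS, transactions, GB stored, etc. per watt.
        return work_units / watts

    print(f"PUE: {pue(1_500, 1_000):.2f}")                     # 1.50
    print(f"IOPS per watt: {per_watt(250_000, 12_000):.1f}")   # activity per unit of energy
    print(f"GB stored per watt: {per_watt(800_000, 12_000):.1f}")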

For those who have completed and sent in your census material along with your 2009 tax returns, congratulations!

For others in the US who have not done so, now would be a good time to get going on those activities.

Likewise, regardless of what country or region you are in, it's always a good time to take a census or inventory of your IT resources instead of waiting every ten years to do so.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

StorageIO in the News Update V2010.1

StorageIO is regularly quoted and interviewed in various industry and vertical market venues and publications both on-line and in print on a global basis.

The following are some coverage, perspectives and commentary by StorageIO on IT industry trends including servers, storage, I/O networking, hardware, software, services, virtualization, cloud, cluster, grid, SSD, data protection, Green IT and more since the last update.

Realizing that some prefer blogs to web sites to twitter to other venues, here are some recent links to media coverage and comments by me on different topics that, among others, can be found at www.storageio.com/news.html:

  • SearchSMBStorage: Comments on EMC Iomega v.Clone for PC data synchronization – Jan 2010
  • Computerworld: Comments on leveraging cloud or online backup – Jan 2010
  • ChannelProSMB: Comments on NAS vs SAN Storage for SMBs – Dec 2009
  • ChannelProSMB: Comments on Affordable SMB Storage Solutions – Dec 2009
  • SearchStorage: Comments on What to buy a geek for the holidays, 2009 edition – Dec 2009
  • SearchStorage: Comments on EMC VMAX storage and 8GFC enhancements – Dec 2009
  • SearchStorage: Comments on Data Footprint Reduction – Dec 2009
  • SearchStorage: Comments on Building a private storage cloud – Dec 2009
  • SearchStorage: Comments on SSD in storage systems – Dec 2009
  • SearchStorage: Comments on slow adoption of file virtualization – Dec 2009
  • IT World: Comments on maximizing data security investments – Nov 2009
  • SearchCIO: Comments on storage virtualization for your organisation – Nov 2009
  • Processor: Comments on how to win approval for hardware upgrades – Nov 2009
  • Processor: Comments on the Future of Servers – Nov 2009
  • SearchITChannel: Comments on Energy-efficient technology sales depend on pitch – Nov 2009
  • SearchStorage: Comments on how to get from Fibre Channel to FCoE – Nov 2009
  • Minneapolis Star Tribune: Comments on Google Wave and Clouds – Nov 2009
  • SearchStorage: Comments on EMC and Cisco alliance – Nov 2009
  • SearchStorage: Comments on HP virtualization enhancements – Nov 2009
  • SearchStorage: Comments on Apple canceling ZFS project – Oct 2009
  • Processor: Comments on EPA Energy Star for Server and Storage Ratings – Oct 2009
  • IT World Canada: Cloud computing, don't be scared, look before you leap – Oct 2009
  • IT World: Comments on stretching your data protection and security dollar – Oct 2009
  • Enterprise Storage Forum: Comments about Fragmentation and Performance? – Oct 2009
  • SearchStorage: Comments about data migration – Oct 2009
  • SearchStorage: Comments about What’s inside internal storage clouds? – Oct 2009
  • Enterprise Storage Forum: Comments about T-Mobile and Clouds? – Oct 2009
  • Storage Monkeys: Podcast comments about Sun and Oracle- Sep 2009
  • Enterprise Storage Forum: Comments on Maxiscale clustered, cloud NAS – Sep 2009
  • SearchStorage: Comments on Maxiscale clustered NAS for web hosting – Sep 2009
  • Enterprise Storage Forum: Comments on who's hot in the data storage industry – Sep 2009
  • SearchSMBStorage: Comments on SMB Fibre Channel switch options – Sep 2009
  • SearchStorage: Comments on using storage more efficiently – Sep 2009
  • SearchStorage: Comments on Data and Storage Tiering including SSD – Sep 2009
  • Enterprise IT Planet: Comments on Data Deduplication – Sep 2009
  • SearchDataCenter: Comments on Tiered Storage – Sep 2009
  • Enterprise Storage Forum: Comments on Sun-Oracle Wedding – Aug 2009
  • Processor.com: Comments on Storage Network Snags – Aug 2009
  • SearchStorageChannel: Comments on I/O virtualization (IOV) – Aug 2009
  • SearchStorage: Comments on Clustered NAS storage and virtualization – Aug 2009
  • SearchITChannel: Comments on Solid-state drive prices still hinder adoption – Aug 2009
  • Check out the Content, Tips, Tools, Videos, Podcasts plus White Papers, and News pages for additional commentary, coverage and related content or events.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved