Upcoming Event: Industry Trends and Perspective European Seminar

Event Seminar Announcement:

IT Data Center, Storage and Virtualization Industry Trends and Perspective
June 16, 2010 Nijkerk, GELDERLAND Netherlands

Event Type: Seminar/Training with Greg Schulz of US-based Server and StorageIO
Sponsor: Brouwer Storage Consultancy
Target Audience: Storage architects, consultants, pre-sales, customer (technical) decision makers
Keywords: Cloud, Grid, Data Protection, Disaster Recovery, Storage, Green IT, VTL, Encryption, Dedupe, SAN, NAS, Backup, BC, DR, Performance, Virtualization, FCoE
Location and Venue: Ampt van Nijkerk, Berencamperweg, Nijkerk, GELDERLAND NL
When: Wed. June 16, 2010, 9AM-5PM local
Price: € 450
Event URL: https://storageioblog.com/book4.html
Contact: Gert Brouwer
Olevoortseweg 43
3861 MH Nijkerk
The Netherlands
Phone: +31-33-246-6825
Fax: +31-33-245-8956
Cell Phone: +31-652-601-309

info@brouwerconsultancy.com

Abstract: General items that will be covered include: current and emerging macro trends, issues, challenges and opportunities; common IT customer and IT trends, issues and challenges; opportunities for leveraging various current, new and emerging technologies and techniques; and what some of the new and improved technologies and techniques are. The seminar will provide insight on how to address various IT and data storage management challenges, and on where and how new and emerging technologies can co-exist with, as well as complement, installed resources for maximum investment protection and business agility. Additional themes include cost and storage resource management, optimization and efficiency approaches, along with where and how cloud, virtualization and other topics fit into existing environments.

Buzzwords and topics to be discussed include, among others:

  • FC and FCoE, SAS, SATA, iSCSI and NAS
  • I/O Virtualization (IOV) and convergence
  • SSD (Flash and RAM), RAID, second generation MAID and IPM, tape
  • Performance and capacity planning, performance and capacity optimization, metrics
  • IRM tools including DPM, E2E, SRA and SRM, as well as federated management
  • Data movement and migration, including automation or policy enabled
  • HA and data protection, including backup/restore, BC/DR, security/encryption, VTL, CDP, snapshots and replication for virtual and non virtual environments
  • Dynamic IT and optimization, the new Green IT (efficiency and productivity)
  • Distributed data protection (DDP) and distributed data caching (DDC)
  • Server and storage virtualization, along with discussion about life beyond consolidation
  • SAN, NAS, clusters, grids, clouds (public and private), bulk and object based storage
  • Unified and vendor prepackaged stacked solutions (e.g. EMC VCE among others)
  • Data footprint reduction (servers, storage, networks, data protection and hypervisors among others)

Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

EMC VPLEX: Virtual Storage Redefined or Respun?

In a flurry of announcements coinciding with EMCworld, taking place in Boston this week of May 10, 2010, EMC officially unveiled the Virtual Storage vision initiative (aka twitter hash tag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here along with one from Stu Miniman (twitter @stu) of EMC here or here), and according to EMC the VPLEX product was made generally available (GA) back in April.

The Virtual Storage vision and associated announcements consisted of:

  • Virtual Storage vision – Big picture  initiative view of what and how to enable private clouds
  • VPLEX architecture – Big picture view of federated data storage management and access
  • First VPLEX based product – Local and campus (Metro to about 100km) solutions
  • Glimpses of how the architecture will evolve with future products and enhancements


Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

The Big Picture
The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud, which should enable characteristics including transparency, agility, flexibility, efficiency, always-on resiliency, security, on-demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft Hyper-V and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage, similar to how virtual servers release applications and their operating systems from being tied to a physical server.

While the current focus of desktop, server and storage virtualization has been on consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency and portability (Figure 2). This next phase puts an emphasis on enablement and doing more with what you have while enhancing business agility; the focus extends from how much can be consolidated, or the number of virtual machines per physical machine, to using virtualization for flexibility and transparency (read more here and here, or watch here).


Figure 2: Virtual Storage Big Picture

That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) means local and distributed (metro and wide area) federation management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX, data and storage management federation across EMC and third party products, FAST (fully automated storage tiering), SSD, data footprint reduction and data protection management products among others.

Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, pots and pans), along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, db loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, FCoE or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, Hyper-V, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates, the big picture here is about enabling flexibility, agility, and ease of deployment and management, along with boosting resource usage effectiveness and presumably productivity on a local, metro and, in the future, global basis.


Figure 3: EMC Storage Federation and Enabling Technology Big Picture

The VPLEX Big Picture
Some of the tenets of the VPLEX architecture (Figure 3) include a scale out cluster or grid design for local and distributed (metro and wide area) access, where you can start small and evolve as needed in a predictable and deterministic manner.


Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

The VPLEX architecture is targeted towards enabling next generation data centers, including private clouds, where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between the underlying arrays and physical or virtual servers, and, in theory, other storage systems that rely on underlying block storage. For example, in theory a NAS (NFS, CIFS and AFS) gateway, a CAS content archiving or object based storage system, or a purpose specific database machine could sit between actual application servers and VPLEX, enabling multiple layers of flexibility and agility for larger environments.

At the heart of the architecture is an engine running a highly distributed data caching algorithm, using an approach where a minimal amount of data is sent to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes, along with providing load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters, including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


Figure 5: EMC VPLEX Big Picture


Figure 6: EMC VPLEX Local with 1 to 4 Engines

Each VPLEX local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, localized Intel compute processors, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports configured for high availability (HA). Communications between the directors and engines is Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.
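
For a rough sense of scale, here is a small back-of-the-envelope tally of the per-cluster totals implied by those figures. This is my own illustrative sketch, not an EMC sizing tool, and it assumes the 16 FE and 16 BE ports are per director (the wording above could also be read as per engine):

```python
# Hypothetical VPLEX Local cluster tally based on the figures above.
# Assumption: 16 FE and 16 BE Fibre Channel ports per director
# (the announcement wording could also be read as per engine).
DIRECTORS_PER_ENGINE = 2
CACHE_GB_PER_DIRECTOR = 64
FE_PORTS_PER_DIRECTOR = 16
BE_PORTS_PER_DIRECTOR = 16

def cluster_totals(engines: int) -> dict:
    directors = engines * DIRECTORS_PER_ENGINE
    return {
        "engines": engines,
        "directors": directors,
        "cache_GB": directors * CACHE_GB_PER_DIRECTOR,
        "FE_ports": directors * FE_PORTS_PER_DIRECTOR,
        "BE_ports": directors * BE_PORTS_PER_DIRECTOR,
    }

for engines in (1, 4):  # a VPLEX local cluster spans 1 to 4 engines
    print(cluster_totals(engines))
# 4 engines -> 8 directors, 512 GB cache, 128 FE and 128 BE ports
# under the per-director assumption.
```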


Figure 7: EMC VPLEX Engine with redundant directors

Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC, along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McData based products.

VPLEX also supports cache write-through (Figure 8) to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS 9000 and USP V/VM along with IBM DS8000 and others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.


Figure 8: VPLEX Architecture and Distributed Cache Overview

Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which retain their native feature functionality via VPLEX write-through caching.

Reads from various servers can be resolved by any node or engine that checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale read or write operations from occurring. Actual metadata communication between nodes is very small, enabling statefulness while reducing overhead and maximizing performance. When a change to cached data occurs, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
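
To make the pointer-based approach more concrete, below is a minimal illustrative sketch of directory-style distributed caching. To be clear, this is not EMC's actual VPLEX algorithm or code; it is a simplified model of the general idea that nodes exchange small metadata (who holds a fresh copy of which block) instead of shipping the data itself:

```python
# Illustrative directory-based distributed caching sketch (not EMC's code).
# Nodes share only small metadata: block_id -> set of nodes holding a fresh
# copy. Writes travel as pointer updates; bulk data moves only on a miss.

class Node:
    def __init__(self, name, directory, backend):
        self.name = name
        self.cache = {}             # local block cache: block_id -> data
        self.directory = directory  # shared metadata index across all nodes
        self.backend = backend      # the underlying (array based) storage

    def read(self, block_id):
        holders = self.directory.setdefault(block_id, set())
        if self in holders:                    # our cached copy is still fresh
            return self.cache[block_id]
        for node in holders:                   # any peer holding a fresh copy?
            data = node.cache[block_id]        # ship the data once, on a miss
            break
        else:
            data = self.backend[block_id]      # pass through to the array
        self.cache[block_id] = data
        holders.add(self)                      # tiny metadata update only
        return data

    def write(self, block_id, data):
        self.backend[block_id] = data          # write-through preserves the
        self.cache[block_id] = data            # underlying array's features
        self.directory[block_id] = {self}      # invalidate peers via metadata,
                                               # preventing stale reads

backend = {"blk0": b"old"}
directory = {}
a, b = Node("A", directory, backend), Node("B", directory, backend)
assert a.read("blk0") == b"old"
b.write("blk0", b"new")           # only a pointer change reaches other nodes
assert a.read("blk0") == b"new"   # A sees it is no longer a holder, refetches
```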


Figure 9: EMC VPLEX Metro Today

For metro deployments, two clusters (Figure 9) are utilized, with distances supported up to about 100km or about 5ms of latency in a synchronous manner, utilizing long distance Fibre Channel optics and transceivers including Dense Wave Division Multiplexing (DWDM) technologies (see Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networking (Elsevier) for additional details on LAN, MAN and WAN topics).
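
As a rough sanity check on the 100km and 5ms pairing (my back-of-the-envelope physics, not EMC's specifications), light propagates through optical fiber at roughly 200,000 km/s, or about 5 microseconds per kilometer one way:

```python
# Rule-of-thumb distance vs. latency check (approximate physics, not specs).
US_PER_KM_ONE_WAY = 5.0   # ~5 microseconds per km one way in optical fiber

def round_trip_ms(distance_km: float, round_trips: int = 1) -> float:
    """Propagation delay only; switches, DWDM gear and the storage
    protocol itself add more latency on top."""
    return 2 * distance_km * US_PER_KM_ONE_WAY * round_trips / 1000.0

print(round_trip_ms(100))     # 1.0 ms for a single 100 km round trip
print(round_trip_ms(100, 2))  # 2.0 ms if the protocol needs two round trips
# A ~5 ms synchronous budget thus leaves headroom for protocol exchanges
# and equipment latency at roughly 100 km distances.
```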

Initially EMC is supporting local or metro (including campus) based VPLEX deployments requiring synchronous communications; however, asynchronous (WAN) geo and global based solutions are planned for the future (Figure 10).


Figure 10: EMC VPLEX Future Wide Area and Global

Online Workload Migration across Systems and Sites
Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

For synchronization and data mobility operations, such as a VMware VMotion or Microsoft Hyper-V Live Migration over distance, information is written to separate LUNs in different locations across what are known as stretch volumes, enabling non disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives. A minimal sketch of this idea follows.
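
Here is that minimal sketch; my own illustration of the general stretch volume idea, not the actual VPLEX implementation. Every write lands on both backing LUNs before being acknowledged, which is why each stretch volume consumes two LUNs from the pool, just as RAID 1 consumes two drives:

```python
# Illustrative stretch volume sketch: a synchronous mirror across two sites.
class StretchVolume:
    def __init__(self, lun_site_a: dict, lun_site_b: dict):
        self.legs = [lun_site_a, lun_site_b]  # two LUNs consumed per volume

    def write(self, block_id, data):
        for leg in self.legs:                 # both legs are updated before
            leg[block_id] = data              # the write is acknowledged

    def read(self, block_id):
        return self.legs[0][block_id]         # either leg can serve reads

    def detach_source(self):
        """Once synchronized, the original leg can be taken offline
        for maintenance, migration or other IRM tasks."""
        return self.legs.pop(0)

lun_a, lun_b = {}, {}
vol = StretchVolume(lun_a, lun_b)
vol.write("blk0", b"data")
assert lun_a["blk0"] == lun_b["blk0"] == b"data"
vol.detach_source()                           # workload continues on site B
assert vol.read("blk0") == b"data"
```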

Unlike other approaches that, for coherency and performance, rely on either no cached data or extensive amounts of cached data (with the subsequent overhead of maintaining statefulness, that is consistency and coherency, including avoiding stale reads or writes), VPLEX relies on a combination of distributed cache lookup tables along with pass through access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached or shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help keep costs affordable.

The approach is not unique, the implementation is
Some storage virtualization solutions that have been software based running on an appliance or network switch as well as hardware system based have had a focus of emulating or providing competing capabilities with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide advanced data and storage management capabilities found in traditional higher end storage products.

While VPLEX, like any tool or technology, could be and probably will be made to do things other than what it is intended for, it is really focused on flexibility, transparency and agility, as opposed to being used as a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or others) storage systems.

This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused around replacing or providing functionality not found in underlying storage system products.

In a nutshell summary, VPLEX and the Virtual Storage story (vision) are about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

Let's Put it Together: When and Where to use a VPLEX
While many storage virtualization solutions are focused around consolidation or pooling, similar to first wave server and desktop virtualization, the next general broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to that of enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

Future applications or usage scenarios should include, in addition to VMware VMotion, Microsoft Hyper-V Live Migration and Microsoft Clustering, other host server clustering solutions (Figure 11).


Figure 11: EMC VPLEX Usage Scenarios

Thoughts and Industry Trends Perspectives:

The following are various thoughts, comments, perspectives and questions pertaining to this announcement as well as storage, virtualization and IT in general.

Is this truly unique as is being claimed?

Interestingly, the message I'm hearing out of EMC is not the claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, but rather that it is their implementation and ability to deploy on a broad basis that is unique. Now granted, you will probably hear, as is often the case with any vendor or fan boy/fan girl, spins of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogosphere, YouTube galleries, twitter land and beyond.

What is the DejaVu factor here?

For some it will be nonexistent, yet for others there is certainly a DejaVu factor, depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many visions and initiatives from the late 90s and early 2000s, when storage virtualization or virtual storage in an open context jumped into the limelight, coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market, while others, including entire companies, have fallen by the wayside for a variety of reasons.

Consequently, if you were part of, read or listened to any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD, not to mention IBM SVC, StorAge (now LSI), Incipient (now part of Texas Memory) or Troika among others, you should have some DejaVu.

I guess that also begs the question of what VPLEX is: in band, out of band, or hybrid fast path control path? From what I have seen, it appears to be a fast path approach combined with distributed caching, as opposed to a cache centric in-band approach such as IBM SVC (either on a server or, as was tried, on the Cisco special service blade) among others.

Likewise, if you are familiar with IBM Mainframe GDPS or even EMC GDDR, as well as OpenVMS local and metro clusters with distributed lock management, you should also have DejaVu. Similarly, if you have looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar, as EMC acquired the assets of that now defunct company.

Is this a way for EMC to sell more hardware along with software products?

By removing barriers and enabling IT staff to support more data on more storage in a denser and more agile footprint, the answer should be yes; something that we may see other vendors emulate, or make noise about what they can or have already been doing.

How is this virtual storage spin different from the storage virtualization story?

That all depends on your view or definition, as well as belief systems and preferences for what is or is not virtual storage vs. storage virtualization. For those who believe that storage virtualization is only virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, then you probably won't see this as virtual storage, let alone storage virtualization. For others, however, it will be confusing, hence EMC introducing terms such as federation and avoiding terms including grid to minimize confusion, yet still play off of the cloud crowd commotion.

Is VPLEX a replacement for storage system based tiering and replication?

I do not believe so. Even though some vendors are making claims that tiered storage is dead, just like some vendors declared a couple of years ago that disk drives would be dead by now at the hands of SSD, neither prediction has come to pass, so to speak (pun intended). What this means for VPLEX is that it leverages the underlying automated or manual tiering found in storage systems, such as EMC FAST enabled or similar policy and manual functions in third party products.

What VPLEX brings to the table is the ability to transparently present a LUN or volume locally or over distance with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves, the applications, file systems or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to dig into to see how it all actually works, as is the case with any new technology.

Who is this for?

I see this as being for environments that need flexibility and agility across multiple storage systems, either from one or multiple vendors, on a local, metro or wide area basis. This is for those environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as VMotion or Live Migration among others.

Do VPLEX and Virtual Storage eliminate the need for storage system functionality?

I see some storage virtualization solutions or appliances that have a focus of replacing underlying storage system functionality instead of coexisting with or complementing it. A way to test for this approach is to listen for or read whether the vendor or provider says anything along the lines of eliminating vendor lock in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

Was Invista a failure, never going into production, and is this a second attempt at virtualization?

There is a popular myth in the industry that Invista never saw the light of day outside of trade show expo or other demos; however, the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus, which was around enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently, Invista has often been drawn into apples to oranges comparisons with other virtualization appliances that focus on pooling along with other functions, or in some cases serve as an appliance based storage system.

The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is around enabling agility for maintenance, facilitating upgrades, moves or reconfiguration and other common IRM tasks, vs. using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with those of the storage virtualizer.

Is this a replacement for EMC Invista?

According to EMC the answer is no, and customers using Invista (yes, there are customers that I have actually talked to) will continue to be supported. However, I suspect that over time Invista will either become a low end entry point for VPLEX, or an entry level VPLEX solution will appear sometime in the future.

How does this stack up or compare with what others are doing?

If you are looking to compare to cache centric platforms such as IBM's SVC, which adds extensive functionality and capabilities within the storage virtualization framework, this is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning as a complement to the underlying storage system model, whereas SVC caches on a per cluster basis, enhancing the functionality of the underlying storage systems. Rest assured there will be other apples to oranges comparisons made between these platforms.

How will this be priced?

When I asked EMC about pricing, they would not commit to a specific price prior to the announcement, other than indicating that there will be options for on demand or consumption (e.g. cloud) pricing, pricing per engine capacity, as well as subscription models (pay as you go).

What is the overhead of VPLEX?

While EMC runs various workload simulations (including benchmarks) internally as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC opposition to simulations such as SPC has been varied; however, this could be a good and interesting opportunity for them to silence the industry (including myself), which continues to ask them (along with a couple of other vendors, including IBM with their XIV) when they will release public results.

The interesting opportunity for EMC, I think, is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, simply show the performance of some third party product that is already tested on the SPC website, and then a submission with that product running attached to a VPLEX.

If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

  • Demonstrating the low latency and minimal to no overhead of VPLEX
  • Showing VPLEX with a third party product, comparing latency before and after
  • Providing a comparison to other virtualization platforms including IBM SVC

As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at the other public workload tests such as ESRP.

Additional related reading material and links:

Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
Chapter 3: Networking Your Storage
Chapter 4: Storage and IO Networking
Chapter 6: Metropolitan and Wide Area Storage Networking
Chapter 11: Storage Management
Chapter 16: Metropolitan and Wide Area Examples

The Green and Virtual Data Center (CRC)
Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
Chapter 4: IT Infrastructure Resource Management (IRM)
Chapter 5: Measurement, Metrics, and Management of IT Resources
Chapter 7: Server: Physical, Virtual, and Software
Chapter 9: Networking with your Servers and Storage

Also see these:

Virtual Storage and Social Media: What did EMC not Announce?
Server and Storage Virtualization – Life beyond Consolidation
Should Everything Be Virtualized?
Was today the proverbial day that he!! Froze over?
Moving Beyond the Benchmark Brouhaha

Closing comments (For now):
As with any new vision, initiative, architecture and initial product, there will be plenty of questions to ask, items to investigate, and early adopter customers or users to talk with to determine what is real, what is future, and what is usable and practical, along with what is nice to have. Likewise there will be plenty of mud ball throwing and slinging between competitors, fans and foes; for those who enjoy watching or reading such exchanges, you should be well entertained.

In general, the EMC vision and story builds on, and presumably delivers on, past industry hype, buzz and vision, with solutions that can be put into environments as a productivity tool that works for the customer, instead of the customer working for the tool.

Remember, the golden rule of virtualization in play here is that whoever controls the virtualization or associated management controls the gold. Likewise keep in mind that aggregation can cause aggravation. So do not be scared; however, look before you leap, meaning do your homework and due diligence with appropriate levels of expectations, aligning applicable technology to the task at hand.

Also, if you have seen or experienced something in the past, you are more likely to have DejaVu as opposed to seeing things as revolutionary. However, it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda; let's see if EMC can leverage their past experiences to make this a LottaLotta.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Happy Earth Day 2010!

Here in the northern hemisphere it is late April and thus mid spring time.

That means the trees are sprouting their buds and leaves and flowering, while other plants and things come to life.

In Minnesota where I live, there is not a cloud in the sky today, the sun is out and it's going to be another warm day in the 60s; a nice day to not be flying or traveling, and thus to enjoy the fine weather.

Among other things of note on this Earth Day 2010:

  • The Minnesota Twins' new home, Target Field, was just named the most green Major League Baseball (MLB) stadium, as well as the greenest in the US, with its LEED (or see here) certification.
  • Iceland's Eyjafjallajokull volcano continues to spew water vapor steam, CO2 and ash at a slower rate than last week when it first erupted, with some speculating that there could be impending activity from other Icelandic volcanoes. Some estimates placed the initial eruption's CO2 impact and the subsequent flight cancellations as neutral, essentially canceling each other out; however, I'm sure we will be hearing many different stories in the weeks to come.

  • Image of Iceland Eyjafjallajokull Volcano Eruption via Boston.com

  • Flights to/from and within Europe and the UK are returning to normal
  • Toyota continues to deal with recalls on some of their US built automobiles including the energy efficient Prius, some of which may have been purchased during the recent US cash for clunkers (CFC) program (hmm, is that ironic or what?)
  • Greenpeace, in addition to using a Facebook page to protest Facebook data center practices, is now targeting cloud IT in general, including just before the Apple iPad launch (here's some comments from Microsoft).
  • Vendors in all industries are lining up for the second coming of Green marketing or perhaps Green Washing 2.0

The new Green IT, moving beyond Green wash and hype

Speaking of Green IT including Green Computing, Green Storage, Virtualization, Cloud, Federation and more, here is a link to a post that I did back in February discussing how the Green Gap continues to exist.

The green gap exists and centers around the confusion over what green means, along with the common disconnects between core IT issues or barriers to becoming more efficient, effective, flexible and optimized, from both an economic as well as environmental basis, and those commonly messaged under the green umbrella (read more here).

Regardless of where you stand on Green, Green washing, Green hype, environmentalism, eco-tech and other related themes, for at least a moment, set aside the politics and science debates and think in terms of practicality and economics.

That is, look for simple, recurring things that can be done to stretch your dollar or spending ability in order to support demand (see figure below) in a more effective manner, along with reducing waste. For example, to meet growing demand requirements in the face of shrinking or stagnant budgets, the action is to stretch available resources to do more work when needed, or retain more where applicable, with the same or less footprint. What this means is that while common messaging is around reducing costs, look at the inverse, which is to do more with available budgets or resources. The result is green in terms of both economic and environmental benefits.

IT Resource demand
Increasing IT Resource Demand

Green IT wheel of opportunity
Green IT enablement techniques and technologies

Look at and understand the broader aspects of being green, which have both economic and environmental benefits without compromising on productivity or functionality. There are many aspects or facets of being green beyond those commonly discussed or perceived to be so (see the Green IT enablement techniques and technologies figure above).

Certainly recycling of paper, water, aluminum, plastics and other items, including technology equipment, is important to reduce waste, and these are things to consider. Another aspect of reducing waste, particularly in IT, is to avoid rework; this can range from finding network bottlenecks or problems that result in continuous retransmission of data, to failed backup, replication or data transfers that cause lost opportunity or resource consumption. Likewise, programming errors (bugs) or misconfiguration that result in rework or lost productivity are also forms of waste, among others.

Another theme is that of shifting from energy avoidance to energy efficiency and effectiveness, which are often thought to be the same. However, the expanded focus is also about getting more work done when needed with the same or fewer resources (see figure below), for example increasing activity (IOPS, transactions, emails or videos served, bandwidth or messages) per watt of energy consumed.

From energy avoidance to effectiveness
Shifting from energy avoidance to effectiveness

One of the many techniques and approaches for addressing energy, including stretching resources and being green, is intelligent power management (IPM). With IPM, the focus is not strictly centered on energy avoidance, but instead on intelligently adapting to different workloads or activity, balancing performance and energy. Thus when there is work to be done, get the work done quickly with as little energy as possible (IOPs or activity per watt); when there is less work, provide lower performance and thus smaller energy requirements; and when there is no work to be done, go into additional energy saving modes. Power management does not have to be exclusively about turning off the lights or IT equipment in order to be green. A generic policy sketch follows.
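
Here is that generic policy sketch; an illustration of the IPM concept rather than any vendor's actual firmware, with made-up power states and thresholds:

```python
# Generic intelligent power management (IPM) policy sketch.
# Idea: match the power state to pending work instead of a blanket on/off.
POWER_STATES = [
    # (state, relative performance, watts) - made-up illustrative numbers
    ("full_speed",  1.0, 12.0),
    ("reduced_rpm", 0.5,  6.0),
    ("standby",     0.0,  1.0),
]

def pick_state(queued_ops: int):
    if queued_ops > 100:
        return POWER_STATES[0]  # burst of work: finish it quickly
    if queued_ops > 0:
        return POWER_STATES[1]  # light work: lower speed, lower energy
    return POWER_STATES[2]      # idle: drop into a deeper energy saving mode

for load in (500, 20, 0):
    name, _perf, watts = pick_state(load)
    print(f"{load:>3} queued ops -> {name} ({watts} W)")
```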

The following two figures look at Green IT past, present and future, with an expanding focus around optimization and effectiveness, meaning getting more work done, storing more data for longer periods of time, and meeting growth demands with what appear to be additional resources, however at a lower per unit cost, without compromising on performance, availability or economics.

Green IT wheel of opportunity
Green IT: Past, present and future shift from avoidance to efficiency and effectiveness

Green IT wheel of opportunity
The new Green IT: Boosting business effectiveness, maximize ROI while helping the environment

If you think about going green as simply doing or using things more effectively, reducing waste, and working more intelligently or effectively, the benefits are both economically and environmentally positive (see the two figures above).

Instead of finding ways to fund green initiatives, shift the focus to how you can enable enhanced productivity, stretching resources further and doing more in the same or a smaller footprint (floor space, power, cooling, energy, personnel, licensing, budgets) for business economic and environmental sustainability, with the result being environmental benefits as well.

Also keep in mind that small percentage changes on a large or recurring basis have significant benefits. For example, a small change in cooling temperature, while staying within vendor guideline recommendations, can result in big savings for large environments.
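
For instance, with purely illustrative numbers:

```python
# Small recurring percentages on a large base add up (illustrative numbers).
annual_energy_bill = 1_000_000  # dollars per year for a large facility
savings_rate = 0.04             # e.g. a modest cooling set point change
print(annual_energy_bill * savings_rate)      # 40,000 dollars every year
print(annual_energy_bill * savings_rate * 5)  # 200,000 dollars over 5 years
```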

 

Bottom line

If you are a business and are discounting green as simply a fad, or perhaps as a public relations (PR) initiative or an activity tied to reducing carbon footprints and recycling, then you are missing out on economic (top and bottom line) enhancement opportunities.

Likewise, if you think that going green is only about the environment, then there is a missed opportunity to boost economic opportunities that can help fund those initiatives.

Going green means many different things to various people and is often more broad and common sense based than most realize.

That is all for now; happy Earth Day 2010!

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Spring 2010 StorageIO Newsletter

Welcome to the spring 2010 edition of the Server and StorageIO (StorageIO) newsletter.

This edition follows the inaugural issue (Winter 2010) incorporating feedback and suggestions as well as building on the fantastic responses received from recipients.

A couple of enhancements included in this issue (marked as New!) are a Featured Related Site along with Some Interesting Industry Links. Another enhancement based on feedback is to include additional commentary that in upcoming issues will expand to include a column article along with industry trends and perspectives.

StorageIO Newsletter Image
Spring 2010 Newsletter

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the spring 2010 newsletter as HTML or PDF, or to go to the newsletter page.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

Also, a very big thank you to everyone who has helped make StorageIO a success!

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

It's US Census time, What about IT Data Centers?

It is that once-a-decade activity time this year, referred to as the US 2010 Census.

With the 2010 census underway, not to mention also time for completing and submitting your income tax returns, if you are in IT, what about measuring, assessing, taking inventory or analyzing your data and data center resources?

US 2010 Census forms
Figure 1: IT US 2010 Census forms

Have you recently taken a census of your data, data storage, servers, networks, hardware, software tools, service providers, media, maintenance agreements and licenses, not to mention facilities?

Likewise, have you figured out what taxes, if any, in terms of overhead or burden exist in your IT environment, or where opportunities to become more optimized and efficient, and thus get an IT resource refund of sorts, are possible?

If not, now is a good time to take a census of your IT data center and associated resources, in what might also be called an assessment, review, inventory or survey of what you have, how it is being used, where, by whom and when, along with associated configuration, performance, availability, security and compliance coverage, as well as costs and energy impact among other items.

IT Data Center Resources
Figure 2: IT Data Center Metrics for Planning and Forecasts

How much storage capacity do you have, and how is it allocated along with being used?

What about storage performance; are you meeting response time and QoS objectives?

Let's not forget about availability, that is planned and unplanned downtime; how have your systems been behaving?

From an energy or power and cooling standpoint, what is the consumption, along with metrics aligned to productivity and effectiveness? These include IOPS per watt, transactions per watt, videos or emails along with web clicks or page views served per watt, processor GHz per watt, data movement bandwidth per watt, and capacity stored per watt in a given footprint.

Other items to look into for data centers besides storage include servers, data and I/O networks, hardware, software, tools, services and other supplies, along with the physical facility and metrics such as PUE. Speaking of optimization, how is your environment doing? That is another advantage of doing a data center census.
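
These productivity and facility metrics are straightforward ratios; for example, with illustrative numbers rather than measurements:

```python
# Simple productivity-per-watt and facility metrics (illustrative values).
def per_watt(activity: float, watts: float) -> float:
    return activity / watts

print(per_watt(25_000, 500))  # 50.0 IOPS per watt for a 500 W storage system
print(per_watt(48.0, 500))    # 0.096 TB stored per watt (48 TB footprint)

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    # PUE = total facility power / IT equipment power; closer to 1.0 is better
    return total_facility_kw / it_equipment_kw

print(pue(1600, 1000))        # 1.6
```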

For those who have completed and sent in your census material along with your 2009 tax returns, congratulations!

For others in the US who have not done so, now would be a good time to get going on those activities.

Likewise, regardless of what country or region you are in, it's always a good time to take a census or inventory of your IT resources instead of waiting every ten years to do so.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

March Metric Madness: Fun with Simple Math

It's March, and besides being spring in North America, it also means tournament season, including the NCAA basketball series among others known as March Madness.

Given the office pools and other forms of playing with numbers tied to the tournaments and real or virtual money, here is a quick timeout looking at some fun with math.

The fun is in showing how simple math can be used to show relative growth for IT resources such as data storage. For example, say that you have 10Tbytes of storage or data and that it is growing at only 10 percent per year; in five years, simple math yields 14.6Tbytes.

Now let's assume the growth rate is 50 percent per year: in the course of five years, instead of 10Tbytes you now have 50.6Tbytes. If you have 100Tbytes today, at a 50 percent growth rate that would yield 506.3Tbytes, or about half a petabyte, in 5 years. If by chance you have, say, 1Pbyte or 1,000Tbytes today, at 25 percent year-over-year growth you would have 2.44Pbytes in 5 years.
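
The arithmetic behind these projections is plain compound growth. Here is a quick sketch that reproduces the figures above, which line up with four year-over-year compounding steps (year 1 as the baseline through year 5):

```python
# Compound growth forecast: value * (1 + rate) ** periods.
def forecast(tb_today: float, annual_growth: float, years: int = 5) -> float:
    # Year 1 is the baseline, so "in five years" spans four compounding steps.
    return tb_today * (1 + annual_growth) ** (years - 1)

for tb, rate in ((10, 0.10), (10, 0.50), (100, 0.50), (1000, 0.25)):
    print(tb, rate, forecast(tb, rate))
# -> 14.64, 50.625, 506.25 (~0.5 PB) and 2441.4 TB (~2.44 PB),
# matching the rounded figures quoted above.
```
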
Basic Storage Forecast
Figure 1: Fun with simple math and projected growth rates

Granted, this is simple math showing basic examples; however, the point is that depending on your growth rate and the amount of either current data or storage, you might be surprised at the forecast or projected needs in only five years.

In a nutshell, these are examples of very basic, primitive capacity forecasts that would vary by other factors. For example, if your data is 10Tbytes and your policy calls for 25 percent free space, that would require even more storage than the base amount. Go with a different RAID level, add some extra space for replication, snapshots and disk to disk backups, not to mention test and development, and those numbers go up even higher. The sketch below extends the growth example with these overheads.
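
Here is that extension of the growth sketch with a couple of those policy factors; the free space reserve and RAID multiplier values are illustrative assumptions, not recommendations:

```python
# Raw capacity needed once policy overheads are layered on (illustrative).
def raw_needed(data_tb: float, free_reserve: float = 0.25,
               raid_multiplier: float = 2.0) -> float:
    usable = data_tb / (1 - free_reserve)  # e.g. keep 25% free space
    return usable * raid_multiplier        # e.g. RAID 1 doubles the raw need

print(raw_needed(10))  # 10 TB of data -> ~26.7 TB raw, before snapshots,
                       # replicas, backups or test/dev copies are added
```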

Sure those amounts can be offset with thin provisioning, dedupe, archiving, compression and other forms of data footprint reduction, however the point here is to realize how simple math can portray a very basic forecast and picture of growth.

Read more about performance and capacity in Chapter 10: Performance and Capacity Planning for Storage Networks, in Resilient Storage Networks (Elsevier), as well as at www.cmg.org (Computer Measurement Group).

And that is all I have to say about this for now, enjoy March madness and fun with numbers.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Hard product vs. soft product

In the IT industry space, and data storage or computers and servers particularly, mention hard product or soft product and what comes to mind?

How about physical vs. virtual servers or storage, hardware vs. software solutions, products vs. services?

By contrast, in the aviation and airline industry among others, mention hard vs. soft product and there is a slight variation, which is the difference between providers' service delivery experiences.

For example, two or more different airlines or carriers may fly the same aircraft perhaps even with the same engines, instrumentation, navigation electronics and base features, all part of the hard product.

However, their hard product could vary by type of seats, spacing or pitch along with width, overhead luggage room, Video on Demand (VoD) or In Flight Entertainment (IFE) as well as different cabin treatments (carpeting, wall coverings) and galley configurations. Even in scenarios where carriers have the same equipment and hard product, their soft product can differ.

Example of a soft product, that is, service (or lack thereof) being delivered

The soft product is the service delivery experience, including that of the cabin crew (flight attendants and pursers), food (or lack thereof), beverage, presentation and so forth. Also part of the soft product can be how seats are allocated or available for selection, the boarding process, and other items that contribute to the overall customer experience.

This all got me thinking on a recent flight where the hard product (e.g. aircraft) of a particular carrier was identical; however, given transitions taking place, the soft product still differed, as it was not fully integrated or merged yet. What the experience got me thinking about is that in IT, customers or solution providers can buy the same technology or hard product (hardware, software, services) from the same suppliers, yet present different soft products or service experiences to their customers.

Example IT hard product (hardware and software) delivering soft product services

IT equipment being used for delivery of different soft products

I'm sure that some of the cloud crowd cheerleaders might even jump up and down and claim that this is the benefit of using managed service providers or similar services to obtain a different soft product. And while that may be true in some instances, it is also true that different traditional IT organizations are able to craft and deploy various types of soft products to their customers, meeting different service requirements and cost or economic objectives using the same technology used by others.

A different example of hard vs. soft product is a site I have visited that has mainframes, Windows and open systems servers, whose business requires a soft product that is highly available, reliable, flexible, fast and affordable. Needless to say, in that environment, some of the open systems including Windows platforms can have reliability close to, if not equal to, the mainframes.

Example IT hard product (hardware and software) delivering soft product services
IT equipment being used for delivery of different soft products

What is even more amazing is that no special or different hard products (e.g. servers, storage, networks or software) are being used to achieve those service objectives. Rather, it is the soft product that achieves the results, in terms of how the techniques are used and managed. Likewise, I have heard of other environments that have mixed mainframe and open systems, using the same common hard products as other organizations, yet whose soft product is not as robust or reliable. If using the same hard product, that is the same software, hardware, networks and services, how could the soft product be any less robust?

The answer is that good and reliable technology is important; however, the technology is only as good as how it is managed, configured, monitored and deployed, centering on processes, procedures and best practices.

Next time you are on an airplane, or using some other service that leverages common technologies (hardware, software or networks), take a moment to look around at the soft product and how the service experience of a common hard product can vary. That is, using common technology, consider how various best practices, policies and operating principles can differ to meet diverse service requirements as well as demand and economic requirements.

What is your take and experience on different hard vs soft products in or around IT?

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Virtual Storage and Social Media: What did EMC not Announce?

Synopsis: EMC made a vision statement in a recent multimedia briefing that has a social networking angle as well as storage virtualization, virtual storage, public and private clouds.

Basically, EMC provided, in a social media networking friendly manner, a preview of a vision initially being referred to as EMC Virtual Storage (aka twitter hash tag #emcvs), which of course sounds similar to a pharmacy chain.

The vision includes stirring up the industry with a new discussion around virtual storage compared to the decade old coverage of storage virtualization.

The underlying theme of this vision is similar to that of virtual servers vs. server virtualization: just as there is the ability to move servers around, so too should there be the ability to move data around more freely, on a local or global basis and in real or near real time. In other words, breaking the decades long affinity that has existed between data storage and the data that exists on it (Figure 1). Buzzword bingo themes include federated storage, virtual storage, public and private cloud, along with global cache coherency among others.


Figure 1: EMC Virtual Storage (EMCVS) Vision

The rest of the story

On Thursday March 11th 2010 Pat Gelsinger (EMC President and COO, Information Infrastructure Products) held an interactive briefing with the global analyst community pertaining to future EMC trajectory or visions. One of the interesting things about this session was that it was not unique to industry analysts nor was it under NDA.

For example, here is a link that if still active, should provide access to the briefing material.

The visions being talked about include those that EMC has discussed in the past, such as virtualized data centers, or, putting a spin on the phrase, data center virtualization, along with public and private clouds as well as infrastructure resource management virtualization (Figure 2):


Figure 2: Public and Private Clouds along with Virtual Data Centers

Figure 2 is a fairly common slide used in many EMC discussions positing public and private clouds along with virtualized data centers.


Figure 3: Tenants of the EMC Virtual Storage (EMCVS) vision


Figure 4: Enabling mobile data, breaking data and storage affinity


Figure 5: Enabling teleporting and virtual storage

This sets up the story for the need and benefit of distributed cache coherency, similar to the distributed lock management (DLM) used in local and wide area clustered file systems to maintain data integrity.


Figure 6: Leveraging distributed cache coherency

This discussion around distributed cache coherency should trigger DejaVu of IBM GDPS (Geographically Dispersed Parallel Sysplex) for mainframes, OpenVMS distributed lock management for VAX and Alpha clusters, Oracle RAC, or other parallel and clustered file systems among others. Likewise, for those familiar with technology from YottaYotta, this should also ring familiar.

However, while many are jumping on the YottaYotta familiarity bandwagon given comments made by Pat Gelsinger, something that came to mind is: what about EMC GDDR? Do not worry if that is an acronym or product you are not up on as an EMC follower; it stands for EMC Geographically Dispersed Disaster Restart (GDDR), a solution that is an alternative to IBM's proprietary GDPS. Perhaps there is no connection, perhaps there is some; however, what role, if any, will EMC's experience with GDDR, including lessons learned, play here, not to mention other clustered file systems?


Figure 7: The EMC vision as presented

One of the interesting things about the vision announcement, and perhaps part of floating it out for discussion, was a comment made by Pat Gelsinger. That comment was about enabling the wild Wild West for IT, something that perhaps one generation might enjoy, however a notion another would soon forget. I'm sure the EMC marketing team, including their new chief marketing officer (CMO) Jeremy Burton, can fine tune this with time.
 

More on the social networking and non NDA angle

As is often the case with many other vendors, these types of customer, partner, analyst or media briefings (either online or in person) are under some form of NDA or embargo, as they contain forward looking, yet to be announced products, solutions, technologies or other business initiatives. Note, these types of NDA discussions are not typically the same as those that portray or pretend to be NDA in order to sound more important a few days before an announcement that has already been leaked to get extra coverage, also known as media embargoes.

After some amount of time, the information covered in advance briefings is usually formally made public, along with additional details. Sometimes material covered under NDA is shared in advance such that third parties can prepare reports, deep dive analysis or assessments and other content that is made available at announcement or shortly thereafter. The material is often prepared by partners, vars, media, analysts, consultants, customers or others outside of the announcing company, via different venues ranging from print, online columns, blogs, tweets, videos and more.

Lately there has been some confusion in the broader IT industry, as in other industries, as to where and how to classify bloggers, tweeters or other social media practitioners. After all, is a blogger an analyst, journalist, freelance writer, advisor, vendor, consultant, customer, var, investor, hobbyist or competitor, and not to mention, how does information get fed to them?

Likewise, NDAs and embargoes have joined the list of topics that some do not like for various reasons, yet others like to complain about. There is a time and place for real NDAs that cover and address material, discussions and other information that should not be shared. However, all too often NDAs get watered down, particularly in the press release games where a vendor or public relations (PR) firm will dangle an announcement briefing a couple of days or perhaps a week or two prior to an announcement, under the guise that it not be disclosed prior to the formal announcement.

Where these NDAs get tricky is that often they are honored by some and ignored by others; thus, those who honor the agreement get left behind by those who break the story. Personally, I do not mind real NDAs that are tied to truly confidential material, discussions or other information that needs to be kept under wraps for various reasons. However, the value or issues of NDAs are a whole different discussion; for now, let's get back to what EMC did not announce in their recent non-NDA briefing.

Different organizations are addressing social media in various ways, some ignoring it, others embracing it. EMC is an example of a vendor who has embraced social networking and social media along with traditional means of developing and maintaining relations with the media (media or press relations), customers, partners, vars, consultants, investors (e.g. investor relations) as well as analysts (analyst relations).

For example, EMC works with analysts in traditional ways as they do with the media and other groups; however, they also recognize that while some analysts (or media or investors or partners or customers or vars etc.) blog and tweet (among other social networking mediums), not all do (as is also the case with media, customers, vars and so forth). Likewise, EMC from a social media and networking perspective does not appear to define audiences based on the medium or tool that they use, but rather takes a matrix or multi dimensional approach.

That is, an analyst with a blog is a blogger, a var or independent consultant with a blog is a blogger, and a media person, including freelance writers, journalists, reporters or publishers, with a blog is a blogger, as are advisors, partners and competitors with blogs.



Some of the 2009 EMC Bloggers Lounge Visitors

Thus at their EMCworld event, admission to the bloggers lounge is as simple and non exclusive as having a blog, regardless of what your role or use of that blog happens to be. On the other hand, information is communicated via different channels: traditional press via public relations folks, investors through investor relations, analysts via analyst relations, and partners and customers through their own venues, and so forth.

When you think about it, this makes sense: after all, EMC sells and attaches storage to mainframes and to open systems Windows, UNIX, Linux as well as virtual servers that use different tools, protocols, languages and points of interest. Thus it should not be surprising that their approach to communicating with different audiences leverages various mediums for diverse messages at multiple points in time.

 

What does all of this social media discussion have to do with the March 11 EMC event?

In my opinion, this was an experiment of sorts by EMC to test the waters, floating a new vision to their traditional pre-briefing audience in advance of talking with the media prior to an actual announcement.

That is, EMC did not announce a new product, technology, initiative, business alliance or customer event; rather, they presented a vision and trajectory, signaling what they may be doing in the future.

How this ties to social media and networking is that rather than holding an event only for those media, bloggers, tweeters, customers, consultants, vars, freelancers, partners or others who agreed to attend under NDA, EMC used the venue as an advance sounding board of sorts.

That is, by sticking to broad vision vs. proprietary, confidential or sensitive topics, the discussion was put out in the open in advance to stimulate conversation in traditional reports, articles, columns or related venues, not to mention in real time via twitter as well as via blogs and beyond.

Does this mean EMC will be moving away from NDAs anytime soon? I do not think so, as there is still very much a need for advance (and not merely a couple of weeks prior to announcement) discussion around sensitive information. For example, with last week's trajectory or visionary discussion by EMC, the short presentation and limited slides prompted more questions than they addressed.

Perhaps what we are seeing is a new approach or technique for how organizations can bring social networking mediums into mainstream business processes, as opposed to those mediums being perceived as niche or experimental.

The reason I think it was an experiment is that EMC practices both traditional analyst/media relations and emerging social media networking relations, which include practitioners that span both audiences. For some, the social media bloggers and tweeters are a different audience than traditional media, writers, consultants or analysts; that is, they are treated as a separate and unique audience.

Thus, it is my opinion that, like human knees, elbows, feet, hands and ears (well, you get the picture), there are many different views, thoughts and interpretations of social media, social networking, blogging, analysts, consultants, advisors, media or press, customers, partners and so on, each with diverse roles, functions and needs.

Where this comes back to the topic of last week's discussion is that of storage virtualization vs. virtual storage. Rest assured, in the time since the EMC briefing and certainly in the weeks or months to come, there will be plenty of knees, elbows, hands and other body parts flying, signaling particular views or definitions of storage virtualization vs. virtual storage.

Of course, some of these will be more entertaining than others, ranging from positions well rehearsed (in some cases over the past decade or more) to new and perhaps even revolutionary takes on what is and is not storage virtualization vs. virtual storage, let alone cloud vs. cluster vs. grid vs. federated and beyond.

 

Additional Comments and thoughts

In general, I like the trajectory vision EMC is rolling out, even if it causes confusion between what is virtual storage vs. storage virtualization; after all, we have been hearing about storage virtualization for over a decade now if not longer. Likewise, there has been plenty of talk about public clouds, so it is refreshing to see more discussion of how to actually leverage what you have to adopt private cloud practices, and less cloud ware or cloud marketecture.

I suspect that as the EMC competition starts to hear or piece together what they think this vision is or is not, we should also start to hear some interesting stories, spins, counter pitches, debates, twitter fights, blog slams and YouTube videos, all of which also happen to consume more storage.

I also like what EMC is doing with social media and networking as a means or medium for building and maintaining relationships, as well as for information exchange complementing traditional means and mediums.

In other words, EMC is succeeding with social networking by not using it as just another megaphone to talk at or over people, but rather as a means to engage, get to know, challenge and exchange, regardless of whether you are a so called independent blogger, tweeter, analyst, media member, consultant, customer, var, investor or partner among others.

If you are not already doing so, check out some of the EMC folks who actively participate in two way dialogues across different areas, with @lendevanna helping to facilitate and leverage the masses of various people and subject matter experts, including @chuckhollis @c_weil @cxi @davegraham @gminks @mike_fishman @stevetodd @storageanarchy @storagezilla @Stu and @vcto among many others.

Note for you non twitter types: the previous are twitter handles (names or addresses) that can be accessed by replacing the @ sign with https://twitter.com/. For example, @storageio = https://twitter.com/storageio
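For those who want to script that substitution, here is a minimal sketch in Python (the handle list is just an example):

```python
# Convert twitter @handles into profile URLs, e.g. @storageio -> https://twitter.com/storageio
def handle_to_url(handle):
    return "https://twitter.com/" + handle.lstrip("@")

for h in ("@storageio", "@chuckhollis", "@stu"):
    print(h, "=", handle_to_url(h))
```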

 

Comments via Twitter:

Here are some comments that I posted via twitter last week during the briefing event with hash tag #emcvs:

Is what was presented on the #emcvs #it #storage #virtualization call NDA material = Negative
Is what was presented on the #emcvs #it #storage #virtualization call a product announcement = NOpe
Is what was presented on the #emcvs #it #storage #virtualization call a statement of direction = Kind of
Is what was presented on the #emcvs #it #storage #virtualization call a hint of future functionality = probably
Is what was presented on the #emcvs #it #storage #virtualization call going to be shared with general public = R U reading this?
Is what was presented on the #emcvs #it #storage #virtualization call going to be discussed further = Yup
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse the industry = Maybe
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse customers = Depends on story teller
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse competition = probably
Is what was presented on the #emcvs #it #storage #virtualization call going to provide fodder/fuel for bloggers = Yup
Anything else to add about #emcvs #it #storage #virtualization call today = Stay tuned, watch and listen for more!

Some additional questions and my perspectives on those include:

  • What did EMC announce? Nothing, it was not an announcement; it was a statement of vision.
  • Why did EMC hold a briefing without an NDA when nothing was announced? It is my opinion that EMC wanted to float an idea or direction, sharing a vision to get discussions going without actually announcing a specific product or technology.
  • Is this going to be a repackaged version of the Invista storage virtualization platform? I do not believe so.
  • Is this going to be a repackaged version of the intellectual property (IP) assets that EMC picked up from the defunct startup Yotta Yotta? Given some references, along with what some of the themes and discussions centered on, it is my guess that there is some Yotta Yotta IP along with other technologies that may be part of any future solution.
  • Who or what is Yotta Yotta? They were a late dot com era startup, founded in 2000, that went through various incarnations and value propositions, with some solutions that shipped. Some of the later era IP included distributed cache coherency and distance enablement of large scale federated storage on a global basis.
  • Can the Yotta Yotta (or here) technology really scale? That remains to be seen. Yotta Yotta had some interesting demos, proofs of concept, early adopters and big plans; however, they also amounted to Nada Nada. Perhaps EMC can make a Lotta Lotta out of it!

 

Other questions are still waiting for answers including among others:

  • Will EMC Virtual Storage (aka emcvs) become a common cure for typical IT infrastructure ailments?
  • Will this restart the debate around the golden rule of virtualization, that whoever controls the virtualization controls the gold, and thus vendor lock in?
  • Will this be a members only vision where only certain partners can participate?
  • What will competitors respond with: technology, marketecture, FUD or something else?
  • What are the specific details of when, where and how the vision is implemented?
  • What will all of this cost, will it work with existing products or is a forklift upgrade needed?
  • Has EMC bitten off more than they can chew or deliver on? Is Pat Gelsinger and his crew racing down a mountain out in front of their skis, or is this brilliance beyond what we mere mortals can yet comprehend?
  • Can global data cache coherency really be deployed with data integrity on a global and large scale without negatively impacting performance?
  • Can EMC make Lotta Lotta with this vision?

 

Here is what some of the EMC bloggers have had to say so far:

Chuck Hollis aka @chuckhollis had this to say

Stuart Miniman aka @stu had this to say

 

Summing it up for now

Let's see how the rest of the industry responds as the vision rolls out and, perhaps sooner vs. later, becomes technology that gets deployed and used.

I'm skeptical until more details are understood; however, I also like it and am intrigued by it, if it can actually jump from Yotta Yotta slide ware to Lotta Lotta deployments.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Green IT, Green Gap, Tiered Energy and Green Myths

There are many different aspects of Green IT along with several myths or misperceptions not to mention missed opportunities.

There is a Green Gap or disconnect between environmentally aware, focused messaging and core IT data center issues. For example, when I ask IT professionals whether they have or are under direction to implement green IT initiatives, the number averages in the 10-15% range.

However, when I ask the same audiences who has or sees power, cooling, floor space, supporting growth, or addressing environmental health and safety (EHS) related issues, the average is 75 to 90%. What this means is that there is a disconnect between what is perceived as being green and the opportunities for IT organizations to make improvements from an economic and efficiency standpoint, including boosting productivity.

 

Some IT Data Center Green Myths
Is “green IT” a convenient or inconvenient truth or a legend?

When it comes to green and virtual environments, there are plenty of myths and realities, some of which vary depending on market or industry focus, price band, and other factors.

For example, there are lines of thinking that only ultra large data centers are subject to power, cooling, floor space and environmental (PCFE) related issues, or that all data centers need to be built along the Columbia River basin in Washington State, or that virtualization eliminates vendor lock-in, or that hardware is more expensive to power and cool than it is to buy.

The following are some myths and realities as of today, some of which may be subject to change from reality to myth or from myth to reality as time progresses.

Myth: Green and PCFE issues are applicable only to large environments.

Reality: I commonly hear that green IT applies only to the largest of companies. The reality is that PCFE issues or green topics are relevant to environments of all sizes, from the largest of enterprises to the small/medium business, to the remote office branch office, to the small office/home office or “virtual office,” all the way to the digital home and consumer.

 

Myth: All computer storage is the same, and powering disks off solves PCFE issues.

Reality: There are many different types of computer storage, with various performance, capacity, power consumption, and cost attributes. Although some storage can be powered off, other storage that is needed for online access does not lend itself to being powered off and on. For storage that needs to be always online and accessible, energy efficiency is achieved by doing more with less—that is, boosting performance and storing more data in a smaller footprint using less power.

 

Myth: Servers are the main consumer of electrical power in IT data centers.

Reality: In the typical IT data center, on average, 50% of electrical power is consumed by cooling, with the balance used for servers, storage, networking, and other equipment. However, in many environments, particularly processing or computation intensive ones, servers in total (including the power to run and cool them) can be a major power draw.

 

Myth: IT data centers produce 2 to 8% of all global Carbon Dioxide (CO2) and carbon emissions.

Reality: This might perhaps be true, given some creative accounting and marketing math to help build a justification case or to scare you into doing something. However, the reality is that in the United States, for example, IT data centers consume around 2 to 4% of electrical power (depending on when you read this), and less than 80% of all U.S. CO2 emissions are from electrical power generation, so the math does not quite add up. The reality is that if no action is taken to improve IT data center energy efficiency, continued demand growth will shift IT power-related emissions from myth to reality, not to mention constrain IT and business sustainability from an economic and productivity standpoint.
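As a quick back-of-envelope check in Python, using only the percentages cited above (rough figures, not measurements):

```python
# Back of the envelope: if IT data centers draw 2 to 4% of U.S. electrical power,
# and power generation accounts for at most ~80% of U.S. CO2 emissions (figures
# from the text above), what share of total CO2 could data centers produce?
dc_power_share_low, dc_power_share_high = 0.02, 0.04
generation_co2_share = 0.80   # upper bound used in the text

low = dc_power_share_low * generation_co2_share
high = dc_power_share_high * generation_co2_share
print(f"Implied data center share of total CO2: {low:.1%} to {high:.1%}")
# Roughly 1.6% to 3.2%, well short of the 8% upper claim
```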

Myth: Server consolidation with virtualization is a silver bullet to address PCFE issues.

Reality: Server virtualization for consolidation is only part of an overall solution that should be combined with other techniques, including lower power, faster and more energy efficient servers, and improved data and storage management techniques.

 

Myth: Hardware costs more to power than to purchase.

Reality: Currently, for some low-cost servers, standalone disk storage, or entry level networking switches and desktops, this may be true, particularly where energy costs are excessively high and the devices are kept in continual use for three to five years. A general rule of thumb is that the actual cost of most IT hardware will be a fraction of the price of associated management and software tools plus facilities and cooling costs. For the most part, at least as of this writing, it is mainly small standalone individual hard disk drives or small entry level volume servers, bought and then used for three to five years in locations that have very high electrical costs, for which this myth holds.

 

Regarding this last myth: for the more commonly deployed external storage systems across all price bands and categories, generally speaking, except for extremely inefficient and hot running legacy equipment, the reality is that it is still cheaper to power the equipment than to buy it. Having said that, there are some qualifiers that should be used as key indicators to keep the equation balanced. These qualifiers include the acquisition cost, if any, of new, expanded, or remodeled habitats or space to house the equipment; the price of energy in a given region, including surcharges; as well as cooling, the length of time, and the continuous time the device will be used.

For larger businesses, IT equipment in general still costs more to purchase than to power, particularly with newer, more energy efficient devices. However, given rising energy prices, or the need to build new facilities, this could change moving forward, particularly if a move toward energy efficiency is not undertaken.

There are many variables when purchasing hardware, including acquisition cost, the energy efficiency of the device, power and cooling costs for a given location and habitat, and facilities costs. For example, if a new storage solution is purchased for $100,000, yet new habitat or facilities must be built for three to five times the cost of the equipment, those costs must be figured into the purchase cost.

Likewise, if the price of a storage solution decreases dramatically, but the device consumes a lot of electrical power and needs a large cooling capacity while operating in a region with expensive electricity costs, that, too, will change the equation and the potential reality of the myth.
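As a rough sketch of how those qualifiers combine, here is a small Python calculation; every input value is a hypothetical placeholder, and the 2x overhead multiplier is an assumption loosely based on the cooling share mentioned earlier:

```python
# Hypothetical power vs. purchase comparison over a multi-year service life.
# All inputs are placeholders; substitute your own device and regional numbers.
def energy_cost(watts, price_per_kwh, years, overhead=2.0, hours_per_year=8760):
    """Energy cost with an overhead multiplier for cooling and facilities."""
    kwh = watts / 1000.0 * hours_per_year * years * overhead
    return kwh * price_per_kwh

purchase_price = 700.00    # e.g. an entry level volume server (hypothetical)
five_year_power = energy_cost(watts=300, price_per_kwh=0.20, years=5)
print(f"5 year energy cost ${five_year_power:,.0f} vs purchase ${purchase_price:,.0f}")
# In a high cost energy region the energy bill can exceed the purchase price,
# which is exactly the qualifier this myth hinges on.
```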

 

Tiered Energy Sources
Given that IT resources and facilities require energy to power equipment as well as keep it cool, electricity and energy efficiency are popular topics associated with Green IT economics, with lots of metrics and numbers tossed around. With that in mind, the U.S. national average CO2 emission is 1.34 lb/kWh of electrical power. Granted, this number will vary depending on the region of the country and the source of fuel for the power-generating station or power plant.
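Using that 1.34 lb/kWh average, estimating a device's emissions footprint is simple multiplication; a small Python sketch (the 500W device is a hypothetical example):

```python
# CO2 estimate using the 1.34 lb/kWh U.S. average cited above.
# The device wattage and duty cycle are hypothetical examples.
CO2_LB_PER_KWH = 1.34

def annual_co2_lb(watts, hours_per_year=8760):
    return watts / 1000.0 * hours_per_year * CO2_LB_PER_KWH

print(f"A 500W device running 24x7 emits about {annual_co2_lb(500):,.0f} lb CO2/year")
```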

Like tiered IT resources (servers, storage, I/O networks, virtual machines and facilities), of which there are various tiers or types of technologies to meet various needs, there are also multiple types of energy sources. Different tiers of energy sources vary by cost, availability and environmental characteristics among others. For example, in the US there are different types of coal, and not all coal is as dirty (when combined with emissions air scrubbers) as you might be led to believe; however, there are other energy sources to consider as well.

Coal continues to be a dominant fuel source for electrical power generation both in the United States and abroad, with other fuel sources including oil, natural gas, liquid propane gas (LPG or propane), nuclear, hydro, thermal or steam, wind and solar. Within a category of fuel, for example coal, there are different emissions per ton of fuel burned. Eastern U.S. coal is higher in CO2 emissions per kilowatt hour than western U.S. lignite coal. However, eastern coal has more British thermal units (Btu) of energy per ton, enabling less coal to be burned in smaller physical power plants.

If you have ever noticed that coal power plants in the United States seem to be smaller in the eastern states than in the Midwest and western states, it’s not an optical illusion. Because eastern coal burns hotter, producing more Btu, smaller boilers and stockpiles of coal are needed, making for smaller power plant footprints. On the other hand, as you move into the Midwest and western states of the United States, coal power plants are physically larger, because more coal is needed to generate 1 kWh, resulting in bigger boilers and vent stacks along with larger coal stockpiles.

On average, a gallon of gasoline produces about 20 lb of CO2, depending on usage and efficiency of the engine as well as the nature of the fuel in terms of octane or amount of Btu. Aviation fuel and diesel fuel differ from gasoline, as does natural gas or various types of coal commonly used in the generation of electricity. For example, natural gas is less expensive than LPG but also provides fewer Btu per gallon or pound of fuel. This means that more natural gas is needed as a fuel to generate a given amount of power.

Recently, while researching small 10 to 12 kW standby generators for my office, I learned about some of the differences between propane and natural gas. What I found was that with natural gas as the fuel, a given generator produced about 10.5 kW, whereas the same unit attached to an LPG or propane fuel source produced 12 kW. The trade off was that to get as much power as possible out of the generator, the higher cost LPG was the better choice; to use lower cost fuel but get less power out of the device, the choice would be natural gas. If more power were needed, then a larger generator could be deployed to use natural gas, with the trade off of requiring a larger physical footprint.
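The generator trade off can be framed as cost per kWh produced; a small sketch in Python, where the two output figures come from the example above and the hourly fuel costs are purely illustrative assumptions:

```python
# Cost per kWh produced for the standby generator example above.
# Outputs (10.5 kW vs 12 kW) come from the text; hourly fuel costs are assumed.
def cost_per_kwh(fuel_cost_per_hour, output_kw):
    return fuel_cost_per_hour / output_kw

ng = cost_per_kwh(fuel_cost_per_hour=2.00, output_kw=10.5)   # natural gas (assumed cost)
lpg = cost_per_kwh(fuel_cost_per_hour=3.20, output_kw=12.0)  # LPG/propane (assumed cost)
print(f"natural gas ${ng:.3f}/kWh vs LPG ${lpg:.3f}/kWh")
# Lower cost fuel with less output vs. higher cost fuel with full output:
# the same capability vs. cost balancing act as tiered IT resources.
```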

Oil and gas are not used as much as fuel sources for electrical power generation in the United States as in other countries such as the United Kingdom. Gasoline, diesel, and other petroleum based fuels are used for some power plants in the United States, including standby or peaking plants. In the electrical power generation and transmission (G and T) industry, as in IT, where different tiers of servers and storage are used for different applications, there are different tiers of power plants using different fuels with various costs. Peaking and standby plants are brought online when there is heavy demand for electrical power, during disruptions when a lower cost or more environmentally friendly plant goes offline for planned maintenance, or in the event of a trip or unplanned outage.

CO2 is commonly discussed with respect to green initiatives and associated emissions; however, there are other so called greenhouse gases (GHG), including nitrous oxide (N2O) and water vapor among others. Carbon makes up only a fraction of CO2; to be specific, only about 27% of a pound of CO2 is carbon, the balance being oxygen. Consequently, carbon emissions tax or trading schemes (ETS), as opposed to CO2 tax schemes, need to account for the amount of carbon per ton of CO2 being put into the atmosphere. In some parts of the world, including the EU and the UK, ETS are either already in place or in initial pilot phases to provide incentives to improve energy efficiency and use.
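For those who like to check the math, the 27% figure falls directly out of the atomic masses:

```python
# Why only ~27% of a pound of CO2 is carbon: the ratio of atomic masses.
C_MASS = 12.011                  # carbon
O_MASS = 15.999                  # oxygen
CO2_MASS = C_MASS + 2 * O_MASS   # ~44.0 for the CO2 molecule
print(f"carbon fraction of CO2 = {C_MASS / CO2_MASS:.1%}")   # ~27.3%
```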

Meanwhile, in the United States there are voluntary programs for buying carbon offset credits, along with initiatives such as the Carbon Disclosure Project (www.cdproject.net), a not for profit organization that facilitates the flow of information pertaining to organizations' emissions so that investors can make informed decisions and business assessments from an economic and environmental perspective. Another voluntary program is the U.S. EPA Climate Leaders initiative, where organizations commit to reduce their GHG emissions to a given level over a specific period of time.

Regardless of your stance or perception on green issues, the reality is that for business and IT sustainability, a focus on ecological and, in particular, the corresponding economic aspects cannot be ignored. There are business benefits to aligning the most energy efficient and low power IT solutions combined with best practices to meet different data and application requirements in an economic and ecologically friendly manner.

Green initiatives need to be seen in a different light, as business enablers as opposed to ecological cost centers. For example, many local utilities and state energy or environmentally concerned organizations are providing funding, grants, loans, or other incentives to improve energy efficiency. Some of these programs can help offset the costs of doing business and going green. Instead of being seen as the cost to go green, addressing efficiency yields byproducts that are economic as well as ecological.

Put a different way, a company can spend carbon credits to offset its environmental impact, similar to paying a fine for noncompliance, or it can achieve efficiency and obtain incentives. There are many solutions and approaches to address these different issues, which will be looked at in the coming chapters.

What does this all mean?
There are real things that can be done today that can be effective toward achieving a balance of performance, availability, capacity, and energy effectiveness to meet particular application and service needs.

Sustaining for economic and ecological purposes can be achieved by balancing performance, availability, capacity, and energy against applicable application service level and physical floor space constraints, along with intelligent power management. Energy economics should be considered as much a strategic resource of IT data centers as are servers, storage, networks, software, and personnel.

The bottom line is that without electrical power, IT data centers come to a halt. Rising fuel prices, strained generating and transmission facilities for electrical power, and a growing awareness of environmental issues are forcing businesses to look at PCFE issues. To support and sustain business growth, including storing and processing more data, IT data centers need to leverage energy efficiency as a means of addressing PCFE issues. By adopting effective solutions, economic value can be achieved with positive ecological results while sustaining business growth.


Want to learn or read more?

Check out Chapter 1 (Green IT and the Green Gap, Real or Virtual?) in my book “The Green and Virtual Data Center” (CRC) here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Post Holiday IT Shopping Bargains, Dell Buying Exanet?

For consumers, the time leading up to the Christmas holiday season is usually busy, including door busters as well as black Friday among other specials for purchasing gifts and other items. However, savvy shoppers will wait until after Christmas or the holidays altogether, perhaps well into the New Year, when some good bargains become available. IT customers are no different, with budgets to use up before the end of the year resulting in a flurry of purchases that should become evident soon as we enter earnings announcement season.

However, there are also bargains for IT organizations looking to take advantage of special vendor promotions trying to stimulate sales, not to mention for IT vendors doing some shopping of their own. Consequently, in addition to the flurry of merger and acquisition (M and A) activity from last summer through the fall, there have been several recent deals, some of which might make Monty Hall blush!

Some recent acquisition activity include among others:

  • Dell bought Perot systems for $3.9B
  • DotHill bought Cloverleaf
  • Texas Memory Systems (TMS) bought Incipient
  • HP bought IBRIX and 3COM among others
  • LSI bought Onstor
  • VMware bought Zimbra
  • Micron bought Numonyx
  • Exar bought Neterion

Now the industry is abuzz about Dell, who is perhaps using some of the loose change left over from holiday sales, being reported as in the process of acquiring Israeli clustered storage startup Exanet for about $12M USD. Compared to previous Dell acquisitions, including EqualLogic in 2007 for about $1.4B or last year's Perot deal in the $3.9B range, $12M is a bargain and would probably not even put a dent in the sales, marketing and advertising budget, let alone the corporate cash coffers, which as of their Q3-F10 balance sheet held about $12.795B in cash.
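For perspective, a quick calculation with the figures above shows how small a dent such a deal would make:

```python
# Scale of the rumored $12M Exanet deal relative to Dell's ~$12.795B
# cash position (both figures from the text above).
deal, cash = 12e6, 12.795e9
print(f"Deal is {deal / cash:.2%} of cash on hand")   # about 0.09%
```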

Who is Exanet and what is their product solution?
Exanet is a small Israeli startup that began shipping a clustered, scale out NAS file serving storage solution (Figure 1) in 2003. The Exanet solution (ExaStore) can be deployed either as software, or as a packaged solution with ExaStore software installed on standard x86 servers combined with external RAID storage arrays to form a clustered NAS file server.

Product features include global name space, distributed metadata, expandable file systems, virtual volumes, quotas, snapshots, file migration, replication, virus scanning and load balancing, along with NFS, CIFS and AFP protocol support. Exanet scales up to 1 Exabyte of storage capacity along with supporting large files and billions of files per cluster.

The target market that Exanet pursues is large scale out NAS where performance (either small random or large sequential I/Os) along with capacity is required. Consequently, in the scale out, clustered NAS file serving space, competitors include IBM GPFS (SONAS), HP IBRIX or PolyServe, Sun Lustre and Symantec SFS among others.

Figure 1: Generic clustered storage model (Source: The Green and Virtual Data Center (CRC))

For a turnkey solution, Exanet packaged their cluster file system software on various vendors' servers combined with 3rd party external Fibre Channel or other storage. This should play well for Dell, who can package the Exanet software on its own servers as well as leverage either SAS or Fibre Channel MD1000/MD3000 external RAID storage among other options (see more below).

Click here to learn more about clustered storage including clustered NAS, clustered and parallel file systems.

Dell

What's the Dell play?

  • It's an opportunity to acquire some intellectual property (IP)
  • It's an opportunity to have IP similar to EMC, HP, IBM, NetApp, Oracle and Symantec among others
  • It's an opportunity to address a market gap or need
  • It's an opportunity to sell more Dell servers, storage and services
  • It's an opportune time for doing acquisitions (bargain shopping)

Note: IBM also this past week announced their new bundled scale out clustered NAS file serving solution based on GPFS, called SONAS. HP has IBRIX in addition to their previous PolyServe acquisition; Sun has ZFS and Lustre.

How does Exanet fit into the Dell lineup?

  • Dell sells Microsoft based NAS as the NX series
  • Dell has an OEM relationship with EMC
  • Dell was OEMing or reselling IBRIX in the past for certain applications or environments
  • Dell has needed to expand its NAS story to balance its iSCSI centric storage story, as well as complement its multifunction block storage solutions (e.g. MD3000) and server solutions.

Why Exanet?
Why Exanet, and why not one of the other startups or small NAS or cloud file system vendors, including BlueArc, Isilon, Panasas, Parascale, Reldata, Open-E or Zetta among others?

My take is that those others were either not relevant to what Dell is looking for, lacked a seamless technology and business fit, had technology tied to non Dell hardware, lacked technology maturity, have investors still expecting a premium valuation, or some combination of the preceding.

Additional thoughts on why Exanet
I think that Dell simply saw an opportunity to acquire some intellectual property (IP), probably including a patent or two. The value of the patents could be in the form of current or future product offerings, perhaps a negotiating tool, or if nothing else a marketing tool. As a marketing tool, Dell via their EqualLogic acquisition among others has been able to demonstrate and generate awareness that they actually own some IP vs. OEMing or reselling from others. I also think that this is an opportunity to either fill or supplement the solution offering that IBRIX provided for high performance, bulk storage and scale out file serving needs.

NAS and file serving supporting unstructured data is a strong growth market for commercial, high performance, specialized or research as well as small business environments. Thus, where EqualLogic plays to the iSCSI block theme, Dell needs to expand their NAS and file serving solutions to provide product diversity to meet various customer application needs, similar to what they do with block based storage. For example, while the iSCSI based EqualLogic PS systems get the bulk of the marketing attention, Dell also has a robust business around the PowerVault MD1000/MD3000 (SAS/iSCSI/FC) and the Microsoft multi protocol based PowerVault NX series, not to mention their EMC CLARiiON based OEM solutions (e.g. Dell AX, Dell/EMC CX).

Thus, Dell can complement the Microsoft multi protocol (block and NAS file) NX with a packaged solution of Dell servers and MD (or other affordable block storage) powered by Exanet. It is also possible that Dell will find a way to package Exanet as a NAS gateway in front of the iSCSI based EqualLogic PS systems, though that would make for an expensive scale out NAS solution compared to those from other vendors.

That's it for now.

Let's see how this all plays out.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Technorati tags: Dell

Does IBM Power7 processor announcement signal storage upgrades?

IBM recently announced the Power7, the latest generation of processors that the company uses in some of its mid range and high end compute servers, including the iSeries and pSeries.


IBM Power7 processor wafers (chips)

 

What is the Power7 processor?
The Power7 is the latest generation of IBM processors (chips) used as the CPUs in IBM mid range and high end open systems (pSeries) for Unix (AIX) and Linux, as well as in the iSeries (the AS400 successor). Building on previous Power series processors, the Power7 increases the performance per core (CPU) along with the number of cores per socket (chip) footprint. For example, each Power7 chip that plugs into a socket on a processor card in a server can have up to 8 cores or CPUs. Note that sometimes physical cores are also known as micro CPUs, not to be confused with the virtual CPUs presented to guests via hypervisor abstraction.

Sometimes you may also hear the term or phrase 2 way, 4 way (not to be confused with a Cincinnati style 4 way chili) or 8 way among others, which refers to the number of cores on a chip. Hence, a dual 2 way would be a pair of processor chips each with 2 cores, while a quad 8 way would be 4 processor chips each with 8 cores, and so on.
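A quick Python illustration of that way and core arithmetic:

```python
# "Way" (cores per chip) and socket arithmetic from the examples above.
def total_cores(chips, cores_per_chip):
    return chips * cores_per_chip

print("dual 2 way:", total_cores(2, 2), "cores")    # two chips x 2 cores = 4
print("quad 8 way:", total_cores(4, 8), "cores")    # four chips x 8 cores = 32
```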


IBM Power7 with up to eight cores per processor (chip)

In addition to faster and more cores in a denser footprint, there are also energy efficiency enhancements, including Energy Star for enterprise servers qualification along with an intelligent power management (IPM, also see here) implementation. IPM is implemented in what IBM refers to as Intelligent Energy technology for turning on or off various parts of the system along with varying processor clock speeds. The benefit: when there is work to be done, get it done quickly; when there is less work, turn some cores off or slow the clock speed down. This is similar to what other industry leaders, including Intel, have deployed with their Nehalem series of processors that also support IPM.

Additional features of the Power7 include (varies by system solutions):

  • Energy Star for servers qualification, providing enhanced performance and efficiency.
  • IBM Systems Director Express, Standard and Enterprise Editions for simplified management, including virtualization capabilities across pools of Power servers as a single entity.
  • PowerVM (hypervisor) virtualization for AIX, iSeries and Linux operating systems.
  • ActiveMemory enables effective memory capacity to be larger than physical memory, similar to how virtual memory works within many operating systems. The benefit is to enable a partition to have access to more memory, which is important for virtual machines, along with the ability to support more partitions in a given physical memory footprint.
  • TurboCore and Intelligent Threads enable workload optimization by selecting the applicable mode for the work to be done, for example single thread per core or simultaneous multi threading (2 or 4 threads) modes per core. The trade off is more threads per core for concurrent processing, or fewer threads to boost single stream performance.

IBM has announced several Power7 enabled or based server system models with various numbers of processors and cores along with standalone and clustered configurations including:

IBM Power7 family of server systems

  • Power 750 Express, a 4U one to four socket server supporting up to 32 cores (3.0 to 3.5 GHz) and 128 threads (4 threads per core), PowerVM (hypervisor), along with main memory capacity of 512GB, or 1TByte of effective memory using Active Memory Expansion.
  • Power 755, 32 3.3GHz Power7 cores (8 cores per processor) with memory up to 256GB, along with AltiVec and VSX SIMD instruction set support. Up to 64 755 nodes, each with 32 cores, can be clustered together for high performance applications.
  • Power 770, up to 64 Power7 cores providing more performance while consuming less energy per core compared to previous Power6 generations. Support for up to 2TB of main memory or RAM using 32GB DIMMs when available later in 2010.
  • Power 780, 64 Power7 cores with TurboCore workload optimization providing a performance boost per core. With TurboCore, 64 cores can operate at 3.8 GHz, or up to 32 cores at 4.1 GHz with twice the amount of cache when more speed per thread is needed. Support for up to 2TB of main memory or RAM using 32GB DIMMs when available later in 2010.

Additional Power7 specifications and details can be found here.

 

What is the DS8000?
The DS8000 is the latest generation of a family of high end enterprise class storage systems supporting IBM mainframe (zSeries) and open systems along with mixed workloads. Being high end open systems and mainframe storage, the DS8000 competes with similar systems from EMC (Symmetrix/DMX/VMAX), Fujitsu (Eternus DX8000), HDS (Hitachi) and HP (XP series, OEMed from Hitachi). Previous generations (aka predecessors) of the DS8000 include the ESS (Enterprise Storage System) Model 2105 (aka Shark) and VSS (Versatile Storage Server). Current generation family members include the Power5 based DS8100 and DS8300 along with the Power6 based DS8700.

IBM DS8000 Storage System

Learn more about the DS8000 here, here, here and here.

 

What is the association between the Power7 and DS8000?
Disclosure: Before I go any further, let's be clear on something: what I am about to post is based entirely on researching, analyzing and correlating (connecting the dots) what is publicly and freely available from IBM on the Web (e.g. there is no NDA material being disclosed here that I am aware of), along with prior trends and tendencies of IBM and their solutions. In other words, you can call it speculation, a prediction, industry analysis perspective, looking into the proverbial crystal ball or an educated guess, and thus it should not be taken as an indicator of what IBM may actually do or be working on. As to what may actually be done or not done, you will need to contact one of the IBM truth squad members.

So what is the linkage between the Power7 and the DS8000?

The linkage between the Power7 and the DS8000 is just that, the Power processors!

At the heart of the DS8000 are Power series processors coupled or clustered together in pairs for performance and availability, running IBM developed storage system software. While the spin doctors may not agree, essentially the DS8000 and its predecessors are based on and around Power series processors clustered together with a high speed interconnect, hosting an operating system and IBM developed storage system application software.

Thus IBM has been able, for over a decade, to leverage technology improvement curve advantages with faster processors, increased memory and I/O connectivity in denser footprints, while enhancing their storage system application software.

Given that the current DS8000 family members utilize 2 way (2 core) or 4 way (4 core) Power5 and Power6 processors, similar to how their predecessors utilized previous generation Power4, Power3 and so forth, it only makes sense that IBM might use a Power7 processor in a future DS8000 (or a derivative, perhaps even with a different name or model number). Again, this is all based on historical trends and patterns of the IBM storage systems group leveraging the latest generation of Power processors; after all, they are a large customer of the Power systems group.

Consequently, it would make sense for the IBM storage folks to leverage the new Power7 processors and features, similar to how EMC is leveraging Intel processor enhancements along with what other vendors are doing.

There is certainly room in the DS8000 architecture for growth in terms of supporting additional nodes or complexes or controllers (or whatever your term of choice is for describing a server), each equipped with multiple processors (chips or sockets) that have multiple cores. While IBM has only commercially released two complex or dual server versions of the DS8000 with various numbers of cores per server, they have come nowhere close to their architectural limit on nodes. In fact, with this release of Power7, as an example, the model 755 can be clustered via InfiniBand with up to 64 nodes, each node having 4 sockets (e.g. 4 way) with up to 8 cores each. That means, on paper, 64 x 4 x 8 = 2048 cores, where each core could run up to 4 threads for concurrency, or, with TurboCore, half as many cores with more cache and speed per core. Now will IBM ever come out with a 64 node DS8000 on steroids?

Tough to say; maybe some day to play specmanship vs. the EMC VMAX 256 node architectural limit, however I'm not holding my breath just yet. Thus, with more and faster cores per processor, the ability to increase the number of processors per server or node, along with architectural capabilities to boost the number of nodes in an instance or cluster, on paper alone there is lots of head room for the DS8000 or a future derivative.
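For those keeping score, here is that on-paper arithmetic in a few lines of Python (using the Power 755 cluster limits cited above, not an actual product configuration):

```python
# On-paper head room using the Power 755 clustering example above.
nodes, sockets_per_node, cores_per_socket, threads_per_core = 64, 4, 8, 4
cores = nodes * sockets_per_node * cores_per_socket
print(f"{cores} cores, up to {cores * threads_per_core} threads")  # 2048 cores, 8192 threads
```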

What about software and functionality? Sure, IBM could in theory simply turn the crank and use a new hardware platform that is faster, has more capacity, is denser and is more energy efficient, however what about new features?

Can IBM enhance the storage system application software that it evolved from the ESS with new features to leverage underlying hardware capabilities, including TurboCore, PowerVM, device and I/O sharing, Intelligent Energy efficiency along with threading enhancements?

Can IBM leverage those and other features to support not only scaling of performance, availability, capacity and energy efficiency in an economical manner, but also add features for advanced automated tiering or data movement plus other popular industry buzzword functionality?

 

Additional thoughts and perspectives
One of the things I find interesting is that some IBM folks along with their channel partners will go to great lengths to explain why and how the DS8000 is not just a pair of Power based servers tightly coupled together. Yet, on the other hand, some of those same folks will go to great lengths touting the advantages of leveraging off the shelf commercial servers based on Intel or AMD, such as IBM's own XIV storage solution.

I can understand that argument in the past, when the likes of EMC, Hitachi and Fujitsu were all competing with IBM by building bigger and more function rich monolithic systems; however, that trend is shifting. The trend now, as seen with EMC VMAX, is to decouple and leverage more off the shelf commercially available technology combined with custom ASICs where and when needed.

Thus, at a time when more attention and discussion is focused on clustered, grid and scalable storage systems, will we see or hear the IBM folks change their tune about the architectural scale up and out capabilities of the Power enabled DS8000 family?

There had been some industry speculation that the DS8000 would be the end of the line if the Power7 were not released; that speculation will now (assuming that IBM leverages the Power7 for storage) shift to whether there will be a Power8 or Power9 and so forth.

From a storage perspective, is the DS8K still relevant?

I say yes, given its installed base and IBM's need to have an enterprise solution of their own (sorry, IMHO XIV does not fit that bill just yet), lest they cut an OEM deal with the likes of Hitachi or Fujitsu, which while possible, I do not see as likely near term. Another soft point on its relevance is to gauge the reaction from their competitors, including EMC and HDS.

From a server perspective, what is the benefit of the new Power7 enabled servers from IBM?

Simple: increased scale of performance for single threaded as well as concurrent or parallel application workloads.

In other words, supporting more web sites, partitions for virtual machines and guest operating system instances, databases, compute and other applications that demand performance and economy of scale.

This also means that IBM has a platform to aggressively go after Sun Solaris server customers with a lifeline during the Oracle transition, not to mention a platform for running Oracle in addition to its own UDB/DB2 database. In addition to being a platform for Unix (AIX) as well as Linux, Power7 series processors are also at the heart of the current generation iSeries (the server formerly known as the AS400).


Closing comments (for now):
Given IBM's history of following a Power chip enhancement with a new upgraded version of the DS8000 (or its ESS/2105 aka Shark/VSS predecessors) within a reasonable amount of time, I would be surprised if we do not see a new DS8000 (perhaps even renamed or renumbered) within the year.

This is similar to how other vendors leverage new processor chip technology evolution to pace their systems upgrades; for example, many vendors who leverage Intel processors have made announcements over the past year since the Nehalem series rolled out, including EMC among others.

Let's see what the IBM truth squads have to say, or, not say :)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Technology Tiering, Servers Storage and Snow Removal

Granted it is winter in the northern hemisphere and thus snow storms should not be a surprise.

However, between December 2009 and early 2010 there has been plenty of record snow activity, from the U.K. (or here) to the U.S. east coast including New York, Boston and Washington DC, across the midwest and out to California; it made for a white Christmas and SANta fun, along with snow fun in general in the new year.

2010 Snow Storm via www.star-telegram.com

What does this have to do with Information Factories, aka IT resources including public or private clouds, facilities, servers, storage and networking along with data management, let alone tiering?

What does this have to do with tiered snow removal, or even snow fun?

Simple: different tools are needed for addressing various types of snow, from wet and heavy to light powdery dustings to deep downfalls. Likewise, there are different types of servers, storage and data networks along with operating systems, management tools and even hypervisors to deal with various application needs or requirements.

First, let's look at tiered IT resources (servers, storage, networks, facilities, data protection and hypervisors) used to meet various efficiency, optimization and service level needs.

Do you have tiered IT resources?

Let me rephrase that question: do you have different types of servers with various performance, availability, connectivity and software that support various applications and cost levels?

Thus, the whole notion of tiered IT resources is to have different resources that can be aligned to the task at hand in order to meet performance, availability, capacity and energy requirements, along with economic and service level agreement (SLA) requirements.

Computers or servers are targeted for different markets including Small Office Home Office (SOHO), Small Medium Business (SMB), Small Medium Enterprise (SME) and ultra large scale or extreme scaling, including high performance super computing. Servers are also positioned for different price bands and deployment scenarios.

General categories of tiered servers and computers include:

  • Laptops, desktops and workstations
  • Small floor standing towers or rack mounted 1U and 2U servers
  • Medium size floor standing towers or larger rack mounted servers
  • Blade Centers and Blade Servers
  • Large size floor standing servers, including mainframes
  • Specialized fault tolerant, rugged and embedded processing or real time servers

Servers take on different names (email server, database server, application server, web server, video or file server, network server, security server, backup server or storage server) depending on their use. In each of these examples, what defines the type of server is the type of software being used to deliver a type of service. Sometimes the term appliance will be used for a server; this is indicative of the type of service the combined hardware and software solution is providing. For example, the same physical server running different software could be a general purpose application server, a database server running for example Oracle, IBM, Microsoft or Teradata among other databases, an email server or a storage server.

This can lead to confusion when looking at servers, in that a server may be able to support different types of workloads; thus, whether it should be considered a server, storage, networking or application platform depends on the type of software being used on it. If, for example, storage software in the form of a clustered and parallel file system is installed on a server to create a highly scalable network attached storage (NAS) or cloud based storage service solution, then the server is a storage server. If the server has a general purpose operating system such as Microsoft Windows, Linux or UNIX and a database on it, it is a database server.

While not technically a type of server, some manufacturers use the term tin wrapped software in an attempt not to be classified as an appliance, server or hardware vendor, while still positioning their software as a turnkey solution rather than a software only solution that requires integration with hardware. The approach is to use off the shelf commercially available general purpose servers with the vendor's software technology pre integrated and installed, ready for use. Thus, tin wrapped software is a turnkey software solution with some tin, or hardware, wrapped around it.

How about the same with tiered storage?

That is, different tiers of storage (Figure 1): fast high performance disk including RAM or flash based SSD, fast Fibre Channel or SAS disk drives, and high capacity SAS and SATA disk drives, along with magnetic tape as well as cloud based backup or archive?

Tiered Storage Resources
Figure 1: Tiered Storage resources

Tiered storage is also sometimes thought of in terms of large enterprise class solutions or midrange, entry level, primary, secondary, near line and offline. Not to be forgotten, there are also tiered networks that support various speeds, convergence, multi tenancy and other capabilities, from IO Virtualization (IOV) to traditional LAN, SAN, MAN and WANs, including 1Gb Ethernet (1GbE) and 10GbE up to emerging 40GbE and 100GbE, not to mention various Fibre Channel speeds supporting various protocols.

The notion around tiered networks is, as with servers and storage, to enable aligning the right technology to the task at hand economically while meeting service needs.

Two other common IT resource tiering techniques involve facilities and data protection. Tiered facilities can indicate size, availability and resiliency among other characteristics. Likewise, tiered data protection aligns the applicable technology to support different recovery time objective (RTO) and recovery point objective (RPO) requirements, for example using synchronous replication where applicable vs. asynchronous time delayed replication for longer distances, combined with snapshots. Other forms of tiered data protection include traditional backups to disk, tape or cloud.
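As a purely illustrative sketch of such an alignment (all technologies and values are hypothetical examples, not recommendations):

```python
# Hypothetical alignment of data protection tiers to RTO/RPO requirements;
# actual values depend entirely on application service level needs.
protection_tiers = {
    "tier 1": {"technology": "synchronous replication + snapshots",  "rpo": "zero",    "rto": "minutes"},
    "tier 2": {"technology": "asynchronous replication + snapshots", "rpo": "minutes", "rto": "hours"},
    "tier 3": {"technology": "backup to disk, tape or cloud",        "rpo": "hours",   "rto": "day or more"},
}
for tier, attrs in protection_tiers.items():
    print(tier, attrs)
```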

There is a new emerging form of tiering in many IT environments: tiered virtualization, or specifically tiered server hypervisors in virtual data centers, with objectives similar to having different server, storage, network, data protection or facilities tiers. Instead of an environment running all VMware, a mix of Microsoft HyperV or Xen among other hypervisors may be deployed to meet different application service class requirements. For example, VMware may be used for premium features and functionality on some applications, while others that do not need those features, and that require lower operating costs, leverage HyperV or Xen based solutions. Taking the tiering approach a step further, one could also declare tiered databases, for example legacy Oracle vs. MySQL or Microsoft SQLserver among other examples.

What about IT clouds: are those different types of resources, or essentially an extension of existing IT capabilities, for example cloud storage being another tier of data storage?

There is another form of tiering, particularly during the winter months in the northern hemisphere where there is an abundance of snow this time of the year. That is, tiered snow management, removal or movement technologies.

What about tiered snow removal?

Well, let's get back to that then.

Like IT resources, there are different technologies that can be used for moving, removing, melting or managing snow.

For example, I can't do much about getting rid of snow other than pushing it all down the hill and into the river, something that would take time and lots of fuel; or, I can manage where I put the snow piles to be prepared for the next storm, placing them where they will melt and help avoid spring flooding. Some technologies can be used for relocating snow elsewhere, kind of like archiving data onto different tiers of storage.

Regardless of whether it is a snowstorm or IT clouds (public or private), virtual, managed service provider (MSP), hosted or traditional IT data centers, all require physical servers, storage, I/O and data networks along with software, including management tools.

Granted, not all servers, storage or networking technologies, let alone software, are the same, as they address different needs. IT resources, including servers, storage, networks, operating systems and even hypervisors for virtual machines, are often categorized and aligned to different tiers corresponding to needs and characteristics (Figure 2).

Tiered IT Resources
Figure 2: Tiered IT resources

For example, in Figure 3 there is a light weight plastic shovel (Shovel 1) for moving small amounts of snow in a wide stripe or pass. Then there is a narrow shovel for digging things out or breaking up snow piles (Shovel 2). Also shown is a light duty snow blower (snow thrower) capable of dealing with powdery or non wet snow and grooming in tight corners or small areas.

Tiered Snow tools
Figure 3: Tiered Snow management and migration tools

For other light dustings, a yard leaf blower does double duty for migrating or moving snow in small or tight corners such as decks and patios, or for cleanup. Larger snowfalls, or where there is a lot of area to clear, involve heavier duty tools such as the Kawasaki mule with a 5 foot Curtis plow. The mule is a multifunction, multi protocol tool capable of being used for hauling, towing, pulling or recreational tasks.

When all else fails, there is a pickup truck to get out and about, not to mention to pull other vehicles out of ditches or snow piles when they become stuck!

Snow movement
Figure 4: Sometimes the snow is light, making for fast, low latency migration

Snow movement
Figure 5: And sometimes even snow migration technology goes off line!


And that is it for now!

Enjoy the northern hemisphere winter and snow while it lasts; make the best of it with the right tools to simplify the tasks of movement and management, similar to IT resources.

Keep in mind, it's about the tools, along with when and how to use them for various tasks, for efficiency and effectiveness, and a bit of snow fun.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

California Center for Sustainable Energy (CCSE)

CCSE Facility and Seminar Series

This past week I had the honor of delivering a keynote presentation in San Diego at the California Center for Sustainable Energy (CCSE) as part of their continuing education and community outreach workshop and seminar series. The theme of the well-attended event was Next Generation Data Center Solutions, and my talk centered around leveraging Green and Virtual Data Centers for enabling efficiency and effectiveness. In addition to my keynote, the event included a panel discussion that I moderated with representatives of the event sponsor Compucom, along with their special guests APC, HP, Intel and VMware.

The CCSE has a focus around Climate Change, Energy Efficiency, Green Buildings, Renewable Energy, Transportation, Home and Business. Their services include awareness and outreach, education programs, a library and tools, plus consulting and associated services. Speaking of their library, there is even a signed copy of my book The Green and Virtual Data Center (CRC) now at the CCSE library that can be checked out along with their other resources.

The CCSE staff and facilities were fantastic, with hosts Mike Bigelow (an energy engineer) and Marlene King (program manager) orchestrating a great event.

If you are in the San Diego area, check out the CCSE located at 8690 Balboa Ave., Suite 100. They have a great library, cool demonstrations and tools that you can check out to assist with optimizing IT data centers from an energy efficiency standpoint. Learn more about the CCSE here.

Following are some relevant links to the keynote and panel discussion from the CCSE event:

Follow these links to view additional videos or podcasts, tips, articles, books, reports and events.

Cheers
gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Technorati tags: Trends

Infosmack Episode 34, VMware, Microsoft and More

Following on the heels of several guest appearances late in 2009 (here, here, here and here) on the Storage Monkeys Infosmack weekly podcast, I was recently asked to join them again for the inaugural 2010 show (Episode 34).

Along with VMguru Rich Brambley and hosts Greg Knieriemen and Marc Farley, we discussed several recent industry topics in this first show of the year, which can be accessed here or on iTunes.

Here's a link to the podcast where you can listen to the discussion, including VMware Go, VMware buying Zimbra, vendor alliances such as HP and Microsoft HyperV and EMC+Cisco+VMware, along with data protection issues, options (or opportunities) for virtual servers, among other topics.

I have included the following links that pertain to some of the items we discussed during the show.

Enjoy the show.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved