New family of Intel Xeon Scalable Processors enables software defined data infrastructures (SDDI) and SDDC

Intel Xeon Scalable Processors SDDI and SDDC

server storage I/O data infrastructure trends

Today Intel announced a new family of Xeon Scalable Processors (aka Purley) that Intel claims are, for some workloads, on average 1.65x faster than their predecessors. Note that your actual improvement will vary based on workload, configuration, benchmark testing, type of processor, memory, and many other server storage I/O performance considerations.

Intel Scalable Xeon Processors
Image via Intel.com

In general the new Intel Xeon Scalable Processors enable legacy and software defined data infrastructures (SDDI), along with software defined data centers (SDDC), cloud and other environments to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity).

Data Infrastructures and workloads

Some of the target application and environment workloads Intel is positioning these new processors for include, among others:

  • Machine Learning (ML), Artificial Intelligence (AI), advanced analytics, deep learning and big data
  • Networking including software defined network (SDN) and network function virtualization (NFV)
  • Cloud and Virtualization including Azure Stack, Docker and Kubernetes containers, Hyper-V, KVM, OpenStack, and VMware vSphere, among others
  • High Performance Compute (HPC) and High Productivity Compute (e.g. the other HPC)
  • Storage including legacy and emerging software defined storage software deployed as appliances, systems or serverless deployment modes

Features of the new Intel Xeon Scalable Processors include:

  • New core microarchitecture with interconnects and on-die memory controllers
  • Sockets (processors) scalable up to 28 cores
  • Improved networking performance using Quick Assist and Data Plane Development Kit (DPDK)
  • Leverages Intel Quick Assist Technology for CPU offload of compute intensive functions including I/O networking, security, AI, ML, big data, analytics and storage functions. Functions that benefit from Quick Assist include cryptography, encryption, authentication, cipher operations, digital signatures, key exchange, lossless data compression and data footprint reduction, along with data at rest encryption (DARE).
  • Optane Non-Volatile Dual Inline Memory Module (NVDIMM) for storage class memory (SCM) also referred to by some as Persistent Memory (PM), not to be confused with Physical Machine (PM).
  • Supports Advanced Vector Extensions 512 (AVX-512) for HPC and other workloads
  • Optional Omni-Path Fabrics in addition to 1/10Gb Ethernet among other I/O options
  • Six memory channels supporting up to 6TB of RDIMM with multi-socket systems
  • From two to eight sockets per node (system)
  • Systems support PCIe 3.x (some supporting x4 based M.2 interconnects)
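As a rough software illustration of one class of function Quick Assist can offload, the sketch below does lossless compression with Python's standard-library zlib. The payload is invented for illustration, and a real QAT deployment would route this work through Intel's offload APIs rather than burning CPU cycles as this does:

```python
import zlib

# Invented, repetitive log-style payload; such data compresses well losslessly
payload = b"2017-07-11 INFO storage node heartbeat ok\n" * 1000

compressed = zlib.compress(payload, level=6)  # done on the CPU in this sketch
restored = zlib.decompress(compressed)

assert restored == payload  # lossless: the original data is fully recoverable
ratio = len(payload) / len(compressed)
print(f"original={len(payload)}B compressed={len(compressed)}B ~{ratio:.0f}:1")
```

The point is not the specific ratio, which depends entirely on the data, but that compression, like the cryptographic functions listed above, is exactly the kind of repetitive compute that benefits from being moved off the general purpose cores.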

Note that exact speeds, feeds, slots and watts will vary by specific server model and vendor options. Also note that some server system solutions have two or more nodes (e.g. two or more real servers) in a single package not to be confused with two or more sockets per node (system or motherboard). Refer to the where to learn more section below for links to Intel benchmarks and other resources.

Software Defined Data Infrastructures, SDDC, SDX and SDDI

What About Speeds and Feeds

Watch for and check out the various Intel partners who have or will be announcing their new server compute platforms based on Intel Xeon Scalable Processors. Each of the different vendors will have various speeds and feeds options that build on the fundamental Intel Xeon Scalable Processor capabilities.

For example Dell EMC announced their 14G server platforms at the May 2017 Dell EMC World event with details to follow (e.g. after the Intel announcements).

Some things to keep in mind: the amount of DDR4 DRAM (or Optane NVDIMM) will vary by vendor server platform configuration, motherboard, number of sockets and DIMM slots. Also keep in mind the differences between registered DIMMs (e.g. buffered RDIMM), which give good capacity and great performance, and load reduced DIMMs (LRDIMM), which have great capacity and ok performance.
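As a back-of-envelope sketch of how those factors combine, maximum memory per node is simply sockets × channels per socket × DIMM slots per channel × DIMM capacity. The numbers below are illustrative assumptions, not any particular vendor's limits:

```python
# Illustrative assumptions (check your vendor's configuration guide):
sockets = 2              # two-socket node
channels_per_socket = 6  # six memory channels per processor
dimms_per_channel = 2    # two DIMM slots per channel
dimm_gb = 64             # 64GB RDIMMs

max_memory_gb = sockets * channels_per_socket * dimms_per_channel * dimm_gb
print(f"max memory per node: {max_memory_gb} GB")  # 1536 GB with these assumptions
```

Swap in your actual socket count, slot count and DIMM size to see why two otherwise similar server models can have very different memory ceilings.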

Various NVMe options

What about NVMe

It’s there, as these systems, like previous Intel models, support NVMe devices via PCIe 3.x slots, with some vendor solutions also supporting M.2 x4 physical interconnects.
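For a sense of what a PCIe 3.x x4 connection (such as an M.2 NVMe slot) can move, here is a quick bandwidth estimate; it accounts for raw link rate and 128b/130b encoding only, ignoring protocol overhead:

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding
transfers_per_sec = 8e9
encoding_efficiency = 128 / 130  # payload bits per bit on the wire
bits_per_transfer = 1            # one bit per transfer per lane

lane_mb_s = transfers_per_sec * encoding_efficiency * bits_per_transfer / 8 / 1e6
x4_gb_s = 4 * lane_mb_s / 1e3

print(f"per lane ~{lane_mb_s:.0f} MB/s, x4 ~{x4_gb_s:.2f} GB/s")  # ~985 MB/s, ~3.94 GB/s
```

Real-world NVMe throughput will land somewhat below that ceiling once transaction-layer packet overhead and device limits are factored in.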

server storageIO flash and SSD
Image via Software Defined Data Infrastructure Essentials (CRC)

Note that Broadcom, formerly known as Avago and LSI, recently announced PCIe-based RAID and adapter cards that support NVMe attached devices in addition to SAS and SATA.

server storage data infrastructure sddi

What About Intel and Storage

In case you have not connected the dots yet, the Intel Xeon Scalable Processor based server (aka compute) systems are also a fundamental platform for storage systems, services, solutions, appliances along with tin-wrapped software.

What this means is that Intel Xeon Scalable Processor based systems can be used for deploying legacy as well as new and emerging software-defined storage software solutions. It also means that the Intel platforms can be used to support SDDC, SDDI, SDX and SDI as well as other forms of legacy and software-defined data infrastructures, along with cloud, virtual, container and serverless among other modes of deployment.

Image Via Intel.com

Moving beyond server and compute platforms, there is another tie to storage as part of this and other recent Intel announcements. Just a few weeks ago Intel announced 64 layer triple level cell (TLC) 3D NAND solutions positioned for the client market (laptops, workstations, tablets, thin clients). With that announcement Intel increased the traditional areal density (e.g. bits per square inch or cm) as well as boosting the number of layers (stacking more bits).

The net result is not only more bits per square inch, but also more per cubic inch or cm. This is all part of a continued evolution of NAND flash, including from 2D to 3D, MLC to TLC, and 32 to 64 layers. In other words, NAND flash-based Solid State Devices (SSDs) remain very relevant and continue to be enhanced even with the emerging 3D XPoint and Optane (also available via Amazon in M.2) in the wings.
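The layer and cell-level gains multiply. As a deliberately simple model (ignoring process geometry and cell size, which also matter), relative capacity per unit of die area scales with layer count times bits stored per cell:

```python
# Simplified model: density scales with layer count and bits per cell
def relative_density(layers: int, bits_per_cell: int) -> int:
    return layers * bits_per_cell

gen_32l_mlc = relative_density(layers=32, bits_per_cell=2)  # 32-layer MLC
gen_64l_tlc = relative_density(layers=64, bits_per_cell=3)  # 64-layer TLC

print(f"relative gain: {gen_64l_tlc / gen_32l_mlc:.1f}x")  # 3.0x in this simple model
```

That multiplicative effect is why moving from 32-layer MLC to 64-layer TLC is a bigger jump than either change alone would suggest.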

server memory evolution
Via Intel and Micron (3D XPoint launch)

Keep in mind that NAND flash-based technologies date back decades and are still evolving. 3D XPoint, announced two years ago, along with other emerging storage class memory (SCM), non-volatile memory (NVM) and persistent memory (PM) devices are part of the future, as is 3D NAND (among others). Speaking of 3D XPoint and Optane, Intel had announcements about those in the past as well.

Where To Learn More

Learn more about Intel Xeon Scalable Processors along with related technology, trends, tools, techniques and tips with the following links.

What This All Means

Some say the PC is dead, and IMHO that depends on what you mean or how you define a PC. For example, if you use PC generically to include servers besides workstations or other devices, then they are alive. If however your view is that PCs are only workstations and client devices, then they are on the decline.

However if your view is that a PC is defined by the underlying processor, such as an Intel general purpose 64 bit x86 derivative (or descendant), then they are very much alive. Just as older generations of PCs leveraging general purpose Intel x86 (and its predecessors) processors were deployed for many uses, so too are today's line of Xeon (among others) processors.

Even with the increase of ARM, GPU and other specialized processors, as well as ASICs and FPGAs for offloads, the role of general purpose processors continues to increase, as does the technology evolution around them. Even so called serverless architectures still need underlying compute server platforms for running software, which also includes software defined storage, software defined networks, SDDC, SDDI, SDX and IoT among others.

Overall this is a good set of announcements by Intel and what we can also expect to be a flood of enhancements from their partners who will use the new family of Intel Xeon Scalable Processors in their products to enable software defined data infrastructures (SDDI) and SDDC.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Green IT deferral blamed on economic recession might be result of green gap

Storage I/O Industry Trends and Perspectives

I recently saw a comment somewhere that talked about Green IT being deferred or set aside due to lack of funding because of ongoing global economic turmoil. For those who see Green IT in the context of green washing efforts that require spending to gain some benefit, that I can understand. After all, if your goal is simply to go and be, or be seen as being, green, there is a cost to doing that.

With tight or shrinking IT budgets there are other realities: while organizations may want to do the right thing and help the environment, that is often seen as overhead by financially conscious management.

On the other hand, turn the green washing messaging off, or at least dial it back a bit, as has been the case the past couple of years.

Expand the Green IT discussion, or change it around a bit, from being seen or perceived as green via energy efficiency or avoidance, to effectiveness, enhanced productivity, and doing more with what you have or with less, and there is a different opportunity.

That opportunity is to meet the financial and business goals or requirements that, as a by-product, help the environment. In other words, expand the focus of Green IT to economics and improving resource effectiveness, and the environment gets a free ride, or, Green gets self-funded.

The Green and Virtual Data Center Book addressing optimization, effectiveness, productivity and economics

The challenge is what I refer to as the Green Gap, which is the disconnect between what is talked about (e.g. messaging) and thus perceived to be Green IT and where common IT opportunities exist (or missed opportunities have occurred).

Green IT, or at least its tenets of driving efficiency and effectiveness to use energy better, addressing recycling and waste, and removal of hazardous substances among other items, continues to thrive. However, the green washing is subsiding, and over time organizations will not be as dismissive of Green IT in the context of improving productivity, reducing complexity and costs, optimization and related themes tied to economics where the environment gets a free ride.

Here are some related links:
Closing the Green Gap
Energy efficient technology sales depend on the pitch
EPA Energy Star for Data Center Storage Update
Green IT Confusion Continues, Opportunities Missed!
How to reduce your Data Footprint impact (Podcast)
Optimizing storage capacity and performance to reduce your data footprint
Performance metrics: Evaluating your data storage efficiency
PUE, Are you Managing Power, Energy or Productivity?
Saving Money with Green Data Storage Technology
Saving Money with Green IT: Time To Invest In Information Factories
Shifting from energy avoidance to energy efficiency
Storage Efficiency and Optimization: The Other Green
Supporting IT growth demand during economic uncertain times
The new Green IT: Efficient, Effective, Smart and Productive
The other Green Storage: Efficiency and Optimization
The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

The new Green IT: Efficient, Effective, Smart and Productive

Given the buzz about big data and conversations or confusion around clouds along with virtualizing virtually anything possible, Green IT has fallen off the Buzzword Bingo Bandwagon.

Green IT, like so many other buzzwords and trends, typically goes through a hype cycle before getting tired, worn out, or disillusioned (see here and here). Often these buzzwords go to Some Day Isle for some rest and recuperation before reappearing later as part of a second or third buzzword wave, either making it to broad adoption, which means the plateau of profitability (for vendors or VARs) and productivity (for customers), or disappearing.

Some Day Isle, for those not familiar with it, is a fictional place that some day you will go to, a wishful happy place so to speak that is perfect for hyperbole R and R. After some R and R, these trends, technologies or techniques often reappear well rested and ready for the next wave of buzz, FUD, hype and activity.

Keep in mind that industry adoption (e.g. everybody is talking about it) can differ from industry deployment (e.g. some people have actually paid for, deployed and using the technology) to broad customer adoption (e.g. many people are actually paying for, deploying and using the technology on a routine basis).

Confusion still reigns around Green IT not surprising given the heavy dose of Green Washing that has occurred.

Consequently Green IT themes or pitches often fall on deaf ears, as people have either become numb to or ignore the Green washing hype or FUD. For example, many people will skip reading this post because the word Green is in the title, assuming it is another CO2 or related themed piece, missing out on the other themes or messages here. Unfortunately, as I have discussed in the past, there remains a Green Gap that results in missed opportunities for vendors, VARs, service providers and IT organizations, along with those who would like to see environmental benefits or change.

Another example of a Green Gap is messaging around energy avoidance as being efficient vs. using energy in a more productive or effective manner (doing more work with the same or fewer resources), shown in the figure below.

Tiered Storage
Expanding focus from energy avoidance to energy usage effectiveness
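One way to make that expanded focus concrete is a work-per-watt metric: instead of asking only how much energy was avoided, ask how much useful work each watt delivers. The figures below are invented purely to illustrate the comparison:

```python
# Invented example: consolidating onto a busier, denser system can raise
# energy use per box yet still improve the work done per watt
def work_per_watt(iops: float, watts: float) -> float:
    return iops / watts

legacy = work_per_watt(iops=50_000, watts=500)         # 100 IOPS per watt
consolidated = work_per_watt(iops=200_000, watts=800)  # 250 IOPS per watt

print(f"effectiveness gain: {consolidated / legacy:.1f}x")  # 2.5x more work per watt
```

Note that the consolidated system draws more power in absolute terms; an avoidance-only view would penalize it, while an effectiveness view shows it doing far more work for the energy consumed.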

In routine conversations with IT professionals it is clear that the Green Gap, and thus missed opportunities, will continue for some time until the business and economic values of efficient, effective, smart and productive IT are understood to have environmental benefits as a by-product, and thus to be Green. Watch for more missed messaging around CO2 and related themes popular with so called Greenies (or, if you prefer, environmentalists) that miss the mark with most business and IT organizations.

Business, and thus IT, are driven by economics, and as such will invest where they can reduce complexity and costs, become more efficient and effective, and increase productivity while reducing waste by working smarter. In other words, changing how information services are delivered in a smarter, more effective and efficient manner maximizes the resources used, enabling more to be done in a denser footprint (budget, staffing, management, power, cooling, floor space), with positive environmental benefits. Put another way, removing complexity lowers costs for IT organizations, while becoming more efficient and effective by reducing waste results in better productivity and fewer missed opportunities, meaning enhanced profits. The net result is that environmental concerns get a free ride, funded as a result of IT organizations improving their productivity, which of course should have a business benefit.

Efficient and Optimized IT Wheel of Opportunity
Wheel of Opportunity: Various techniques and technologies for infrastructure optimization

Efficient and effective IT (aka the other Green IT), which links common technology and business issues with the benefit of helping the environment, can be accomplished using a combination of approaches. The approaches for enabling an efficient, effective, smarter and productive IT environment include, from a generic perspective, the various technologies, techniques and best practices shown in the wheel of opportunity figure.

For example:

Here are some related links for additional reading:

Also check out my book for enabling efficient, effective and smart IT, The Green and Virtual Data Center (CRC), including a free sample chapter download here.

Ok, nuff said for now, go hug a tree, your computer, hybrid car, droid, ipad or whatever suits your needs.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC), The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Green IT goes mainstream: What about data storage environments?

I recently did an interview with the folks over at Infortrend (a RAID storage company) discussing various industry trends and perspectives including RAID, data footprint reduction (DFR), and Green IT including the Green Gap.

The Green Gap is the disconnect between common messaging around carbon and environment vs. IT and business productivity sustainment challenges that continues to result in confusion along with missed opportunities.

  • There is no such thing as a data or information recession
  • Organizations of all size will continue to have to support growth in a denser fashion
  • Doing more in a denser manner also means acquiring as well as managing more usable IT resources per dollar spent
  • Optimization and data footprint reduction (DFR) expands focus from reduction efficiency to productivity effectiveness
  • Energy efficiency shifts from avoidance to energy effectiveness where more work is done to support business productivity and sustainment
  • RAID is alive however it continues to evolve as well as leveraged in conjunction with other techniques

Here is the link to the first of a two part series where you can read my comments on how many organizations are missing out on economic as well as business sustainability benefits due to confusion and the Green Gap among other topics.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Back to school shopping: Dude, Dell Digests 3PAR Disk storage

Dell

No sooner has the dust settled from Dell's other recent acquisitions than it's back to school shopping time, and the latest bargain for the Round Rock, Texas folks is bay (San Francisco) area storage vendor 3PAR for $1.15B. As a refresher, some of Dell's more recent acquisitions include $1.4B for EqualLogic a few years ago and $3.9B for Perot Systems, not to mention Exanet, KACE and Ocarina earlier this year. For those interested, as of April 2010 reporting figures found here, Dell showed about $10B USD in cash, and here is financial information on publicly held 3PAR (PAR).

Who is 3PAR
3PAR is a publicly traded company (PAR) that makes a scalable or clustered storage system with many built-in advanced features typically associated with high end EMC DMX and VMAX as well as CLARiiON, in addition to Hitachi, HP or IBM enterprise class solutions. The InServ (3PAR's storage solution) combines hardware and software in a very scalable solution that can be configured for smaller environments or larger enterprises by varying the number of controllers or processing nodes, connectivity (server attachment) ports, cache and disk drives.

Unlike EqualLogic, which is more of a mid market iSCSI-only storage system, the 3PAR InServ is capable of going head to head with the EMC CLARiiON as well as DMX or VMAX systems that support a mix of iSCSI and Fibre Channel, or NAS via gateways or appliances. Thus while there were occasional competitive situations between 3PAR and Dell EqualLogic, they were for the most part targeted at different market sectors or customer deployment scenarios.

What does Dell get with 3PAR?

  • A good deal if not a bargain on one of the last new storage startup pure plays
  • A public company that is actually generating revenue with a large and growing installed base
  • A seasoned sales force who knows how to sell into the enterprise storage space against EMC, HP, IBM, Oracle/SUN, Netapp and others
  • A solution that can scale in terms of functionality, connectivity, performance, availability, capacity and energy efficiency (PACE)
  • Potential route to new markets where 3PAR has had success, or to bridge gaps where both have played and competed in the past
  • Did I say a company with an established footprint of installed 3PAR Inserv storage systems and good list of marquee customers
  • Ability to sell a solution for which they own the intellectual property (IP), instead of partner EMC's
  • Plenty of IP that can be leveraged within other Dell solutions, not to mention combine 3PAR with other recently acquired technologies or companies.

On a lighter note, Dell picks up once again Marc Farley, who was with them briefly after the EqualLogic acquisition and then departed to 3PAR, where he became director of social media including the launch of Infosmack on Storage Monkeys with co-host Greg Knieriemen (@Knieriemen). Of course the twitter world and traditional coconut wires are now speculating about where Farley will go next that Dell may end up buying in the future.

What does this mean for Dell and their data storage portfolio?
While in no way all inclusive or comprehensive, table 1 provides a rough framework of different price bands, categories, tiers and market or application segments requiring various types of storage solutions that Dell can sell into.

 

| | HP | Dell | EMC | IBM | Oracle/Sun |
|---|---|---|---|---|---|
| Servers | Blade systems, rack mount, towers to desktop | Blade systems, rack mount, towers to desktop | Virtual servers with VMware, servers via vBlock, servers via Cisco | Blade systems, rack mount, towers to desktop | Blade systems, rack mount, towers to desktop |
| Services | HP managed services, consulting and hosting supplemented by EDS acquisition | Bought Perot Systems (an EDS spin off/out) | Partnered with various organizations and services | Has been doing smaller acquisitions adding tools and capabilities to IBM global services | Large internal consulting and services as well as Software as a Service (SaaS) hosting, partnered with others |
| Enterprise storage | XP (FC, iSCSI, FICON for mainframe and NAS with gateway), OEMed from Hitachi Japan, parent of HDS | 3PAR (iSCSI and FC or NAS with gateway) replaces EMC CLARiiON or perhaps rare DMX/VMAX at high end? | DMX and VMAX | DS8000 | Sun resold HDS version of XP/USP, however Oracle has since dropped it from the lineup |
| Data footprint impact reduction | Dedupe on VTL via Sepaton plus HP developed technology or OEMed products | Dedupe in OEM or partner software or hardware solutions, recently acquired Ocarina | Dedupe in Avamar, Data Domain, Networker, Celerra, Centera, Atmos; CLARiiON and Celerra compression | Dedupe in various hardware and software solutions, source and target, compression with Storwize | Dedupe via OEM VTLs and other Sun solutions |
| Data preservation | Database and other archive tools, archive storage | OEM solutions from EMC and others | Centera and other solutions | Various hardware and software solutions | Various hardware and software solutions |
| General data protection (excluding logical or physical security and DLP) | Internal Data Protector software plus OEM, partners with other software, various VTL, TL and target solutions as well as services | OEM and resell partner tools as well as Dell target devices and those of partners. Could this be a future acquisition target area? | Networker and Avamar software, Data Domain and other targets, DPA management tools and Mozy services | Tivoli suite of software and various hardware targets, management tools and cloud services | Various software and partner tools, tape libraries, VTLs and online storage solutions |
| Scale out, bulk, or clustered NAS | eXtreme scale out, bulk and clustered storage for unstructured data applications | Exanet on Dell servers with shared SAS, iSCSI or FC storage | Celerra and Atmos | IBM SONAS or N series (OEM from NetApp) | ZFS based solutions including 7000 series |
| General purpose NAS | Various gateways for EVA or MSA or XP, HP IBRIX or PolyServe based as well as Microsoft WSS solutions | EMC Celerra, Dell Exanet, Microsoft WSS based. Acquisition or partner target area? | Celerra | N series OEMed from NetApp as well as growing awareness of SONAS | ZFS based solutions. Whatever happened to Procom? |
| Mid market multi protocol block | EVA (FC with iSCSI or NAS gateways), LeftHand (P Series iSCSI) for lower end of this market | 3PAR (FC and iSCSI, NAS with gateway) for mid to upper end of this market, EqualLogic (iSCSI) for the lower end, some residual EMC CX activity phases out over time? | CLARiiON (FC and iSCSI with NAS via gateway), some smaller DMX or VMAX configurations for mid to upper end of this market | DS5000 and DS4000 (FC and iSCSI with NAS via a gateway), both OEMed from LSI, XIV and N series (NetApp) | 7000 series (ZFS and Sun storage software running on Sun server with internal storage, optional external storage), 6000 series |
| Scalable SMB iSCSI | LeftHand (P Series) | EqualLogic | Celerra NX, CLARiiON AX/CX | XIV, DS3000, N series | 2000, 7000 |
| Entry level shared block | MSA2000 (iSCSI, FC, SAS) | MD3000 (iSCSI, FC, SAS) | AX (iSCSI, FC) | DS3000 (iSCSI, FC, SAS), N series (iSCSI, FC, NAS) | 2000, 7000 |
| Entry level unified multi function | X series (not to be confused with eXtreme series) HP servers with Windows Storage Software | Dell servers with Windows Storage Software or EMC Celerra | Celerra NX, Iomega | xSeries servers with Microsoft or other software installed | ZFS based solutions running on Sun servers |
| Low end SOHO | X series (not to be confused with eXtreme series) HP servers with Windows Storage Software | Dell servers with storage and Windows Storage Software. Future acquisition area perhaps? | Iomega | | |

Table 1: Sampling of various tiers, architectures, functionality and storage solution options

Clarifying some of the above categories in table 1:

Servers: Application servers or computers running Windows, Linux, HyperV, VMware or other applications, operating systems and hypervisors.

Services: Professional and consulting services, installation, break fix repair, call center, hosting, managed services or cloud solutions

Enterprise storage: Large scale (hundreds to thousands of drives), many front end as well as back end ports, multiple controllers or storage processing engines (nodes), large amounts of cache and equally strong performance, feature rich functionality, resilient and scalable.

Data footprint impact reduction: Archive, data management, compression, dedupe, thin provision among other techniques. Read more here and here.

Data preservation: Archiving for compliance and non regulatory applications or data including software, hardware, services.

General data protection: Excluding physical or logical data security (firewalls, dlp, etc), this would be backup/restore with encryption, replication, snapshots, hardware and software to support BC, DR and normal business operations. Read more about data protection options for virtual and physical storage here.

Scale out NAS: Clustered NAS, bulk unstructured storage, cloud storage system or file system. Read more about clustered storage here. HP has their eXtreme X series of scale out and bulk storage systems as well as gateways. These leverage IBRIX and Polyserve which were bought by HP as software, or as a solution (HP servers, storage and software), perhaps with optional data reduction software such as Ocarina OEMed by Dell. Dell now has Exanet which they bought recently as software, or as a solution running on Dell servers, with either SAS, iSCSI or FC back end storage plus optional data footprint reduction software such as Ocarina. IBM has GPFS as a software solution running on IBM or other vendors servers with attached storage, or as a solution such as SONAS with IBM servers running software with IBM DS mid range storage. IBM also OEMs Netapp as the N series.

General purpose NAS: NAS (NFS and CIFS or optional AFP and pNFS) for everyday enterprise (or SME/SMB) file serving and sharing

Mid market multi protocol block: For SMB to SME environments that need scalable shared (SAN) scalable block storage using iSCSI, FC or FCoE

Scalable SMB iSCSI: For SMB to SME environments that need scalable iSCSI storage with feature rich functionality including built in virtualization

Entry level shared block: Block storage with flexibility to support iSCSI, SAS or Fibre Channel, with optional NAS support built in or available via a gateway. For example, external SAS RAID shared storage between two or more servers configured in a Hyper-V or VMware cluster that do not need, or cannot afford, the higher cost of iSCSI. Another example would be shared SAS (or iSCSI or Fibre Channel) storage attached to a server running storage software such as a clustered file system (e.g. Exanet), or VTL, dedupe, backup, archiving or data footprint reduction tools, or perhaps database software, where the higher cost or complexity of an iSCSI or Fibre Channel SAN is not needed. Read more about external shared SAS here.

Entry level unified multifunction: This is storage that can do block and file yet is scaled down to meet ease of acquisition, ease of sale, channel friendly, simplified deployment and installation yet affordable for SMBs or larger SOHOs as well as ROBOs.

Low end SOHO: Storage that can scale down to consumer, prosumer or the lower end of SMB (e.g. SOHO), providing a mix of block and file, yet priced and positioned below higher priced multifunction systems.

Wait a minute, is that too many different categories or types of storage?

Perhaps, however it also enables multiple tools (tiers of technologies) to be in a vendor's tool box, or in an IT professional's tool bin, to address different challenges. Let's come back to this in a few moments.

 

Some Industry trends and perspectives (ITP) thoughts:

How can Dell with 3PAR be an enterprise play without IBM mainframe FICON support?
Some would say forget about it, mainframes are dead and thus not a Dell objective, even though EMC, HDS and IBM sell a ton of storage into those environments. That is a fair enough argument, and one that 3PAR has faced for years while competing with EMC, HDS, HP, IBM and Fujitsu, so they are versed in how to handle that discussion. Thus the 3PAR teams can help the Dell folks determine where to hunt and farm for business, something many of the Dell folks already know how to do. After all, today they have to flip that business to EMC, or worse.

If truly pressured and in need, Dell could continue reference sales with EMC for DMX and VMAX. Likewise they could go to Bustech and/or Luminex, who have open systems to mainframe gateways (including VTL support), under a custom or special solution sale. Ironically, EMC has in the past OEMed Bustech to transform their high end storage into mainframe VTLs (not to be confused with FalconStor or Quantum for open systems), and Data Domain has partnered with Luminex.

BTW, did you know that Dell has had for several years a group or team that handles specialized storage solutions addressing needs outside the usual product portfolio?

Thus IMHO Dell's enterprise class focus will be on open systems at large scale, where they will compete with EMC DMX and VMAX, the HDS USP or its soon to be announced enhancements, HP and their Hitachi Japan OEMed XP, IBM and the DS8000, as well as the seldom heard about yet equally scalable Fujitsu Eternus systems.

 

Why only $1.15B, after all they paid $1.4B for EqualLogic?
IMHO, had this deal occurred a couple of years ago, when some valuations were still flying higher than today, and had 3PAR been at its current sales run rate and customer deployment level, the amount might have been higher. Either way, this is still a great value for Dell as well as for 3PAR investors, customers, employees and partners.

 

Does this mean Dell dumps EMC?
Near term I do not think Dell dumps the EMC dudes (or dudettes) as there is still plenty of business in the mid market for the two companies. However, over time, I would expect that Dell will unleash the 3PAR folks into the space where normally a CLARiiON CX would have been positioned such as deals just above where EqualLogic plays, or where Fibre Channel is preferred. Likewise, I would expect Dell to empower the 3PAR team to go after additional higher end deals where a DMX or VMAX would have been the previous option not to mention where 3PAR has had success.

This would also mean extending into sales against the HP EVA and XP, IBM DS5000 and DS8000 as well as XIV, and Oracle/Sun 6000s and 7000s to name a few. In other words there will be some spin around coopetition, however longer term you can read the writing on the wall. Oh, btw, lest you forget, Dell is first and foremost a server company that is now getting into storage in a much bigger way, and EMC is first and foremost a storage company that is getting into servers via VMware as well as their Cisco partnerships.

Are shots being fired across each other's bows? I will leave that up to you to speculate.

 

Does this mean Dell MD1000/MD3000 iSCSI, SAS and FC disappears?
I do not think so, as they have had a specific role at the entry level, below where the iSCSI only EqualLogic solution fits, providing mixed iSCSI, SAS and Fibre Channel capabilities to compete with the HP MSA2000 (OEMed from Dot Hill) and IBM DS3000 (OEMed from LSI). While 3PAR could be taken down into some of these markets, doing so would potentially dilute the brand and thus the premium margin of those solutions.

Likewise, there is a play with server vendors to attach shared external SAS storage to small 2 and 4 node clusters for VMware, Hyper-V, Exchange, SQL, SharePoint and other applications where iSCSI or Fibre Channel are too expensive or not needed, or where NAS is not a fit. Another play for shared external SAS is attaching low cost storage to scale out clustered NAS or bulk storage where software such as Exanet runs on a Dell server. Take a closer look at how HP is supporting their scale out offerings, as well as IBM and Oracle among others. Sure, you can find iSCSI or Fibre Channel or even NAS back ends to file servers; however, there is a growing trend toward using shared SAS.

 

Does Dell now have too many different storage systems and solutions in their portfolio?
Possibly, depending upon how you look at it, and certainly the potential is there for revenue prevention teams to get in the way of each other instead of competing with external competitors. However, if you compare the Dell lineup with those of EMC, HP, IBM and Oracle/Sun among others, it is not all that different. Note that HP, IBM and Oracle also have something in common with Dell in that they are general IT resource providers (servers, storage, networks, services, hardware and software) as compared to traditional storage-only vendors.

Consequently, if you look at these vendors in terms of their different markets, from consumer to prosumer to SOHO at the low end of the SMB, to the SME space that sits between SMB and enterprise, they have diverse customer needs. Likewise, if you look at these vendors' server offerings, they too are diverse, ranging from desktops to floor standing towers to racks, high density racks and blade servers, which also need various tiers, architectures, price bands and purposed storage functionality.

 

What will be key for Dell to make this all work?
The key for Dell will be similar to that of their competitors which is to clearly communicate the value proposition of the various products or solutions, where, who and what their target markets are and then execute on those plans. There will be overlap and conflict despite the best spin as is always the case with diverse portfolios by vendors.

However, the key is whether Dell can keep their teams focused on expanding their customer footprint at the expense of the external competition vs. cannibalizing their own internal product lines, not to mention creating or extending into new markets or applications. Dell now has many tools in their tool box and thus needs to educate their solution teams on what to use or sell when, where, why and how, instead of having just one tool or a singular focus. In other words, while a great solution, Dell no longer has to respond as if the answer to everything is iSCSI based EqualLogic.

Likewise Dell can leverage the same emotion and momentum behind the EqualLogic teams to invigorate and unleash the 3PAR teams and solutions into the higher end of the SMB, SME and enterprise environments.

I'm still thinking that Exanet is a diamond in the rough for Dell, where they can install the clustered scalable NAS software onto their servers and use either lower end shared SAS RAID (e.g. MD3000), iSCSI (MD3000, EqualLogic or 3PAR), or higher end Fibre Channel (with 3PAR) for scale out, cloud and other bulk solutions, competing with HP, Oracle and IBM. Dell still has the Windows based storage server for entry level multiprotocol block and file capabilities, as well as what they OEM from EMC.

 

Is Dell done shopping?
IMHO I do not think so, as there are still areas where Dell can extend their portfolio, and not just in storage. Likewise there are still some opportunities, or perhaps bargains, out there for fall and beyond acquisitions.

 

Does this mean that Dell is not happy with EqualLogic and iSCSI?
Simply put, from my perspective of talking with Dell customers, prospects and partners and seeing them all in action, nothing could be further from the truth than the idea that Dell is unhappy with iSCSI or EqualLogic. Look at this as a way to extend the Dell story and capabilities into new markets; granted, the EqualLogic folks now have a new sibling to compete with for internal marketing and management love and attention.

 

Isn't Dell just an iSCSI focused company?
A couple of years ago I was quoted in one of the financial analyst reports as saying that Dell needed to remain open to various forms of storage instead of becoming singularly focused on just iSCSI as a result of the EqualLogic deal. I stand by that statement: for Dell to be a strong enterprise contender, it needs a balanced portfolio across different price or market bands, from block to file, from shared SAS to iSCSI to Fibre Channel and emerging FCoE.

This also means supporting traditional NAS across those different price bands or market sectors, as well as supporting the emerging and fast growing unstructured data markets where there is a need for scale out and bulk storage. Thus it is great to see Dell remaining open minded and not becoming singularly focused on just iSCSI, instead providing the right solution to meet their diverse customer as well as prospect needs and opportunities.

While EqualLogic was and is a very successful iSCSI focused storage solution, not to mention one that Dell continues to leverage, Dell is more than just iSCSI. Take a look at Dell's current storage lineup (see Table 1) and there is a lot of existing diversity. Granted, some of that current diversity is via partners, which the 3PAR deal helps to address. What this means is that iSCSI continues to grow in popularity, however there are other needs where shared SAS, Fibre Channel or FCoE will be required, opening new markets to Dell.

 

Bottom line and wrap up (for now)
This is a great move for Dell (as well as 3PAR) to move up market in the storage space with less reliance on EMC. Assuming that Dell can communicate what to use when, where, why and how to their internal teams, partners, the industry and customers, not to mention then execute on it, they should have themselves a winner.

Will this deal end up being an even better bargain than when Dell paid $1.4B for EqualLogic?

Not sure yet; it certainly has potential if Dell can execute on their plans without losing momentum in any of their other areas (products).

Whats your take?

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Here are some related links to read more

Happy Earth Day 2010!

Here in the northern hemisphere it is late April and thus mid spring time.

That means the trees are sprouting their buds, leaves and flowers while other plants and things come to life.

In Minnesota where I live, there is not a cloud in the sky today, the sun is out and it's going to be another warm day in the 60s; a nice day to not be flying or traveling and thus enjoy the fine weather.

Among other things of note on this earth day 2010 include:

  • The Minnesota Twins' new home, Target Field, was just named the most green Major League Baseball (MLB) stadium, as well as the greenest in the US, with its LEED (or see here) certification.
  • Iceland's Eyjafjallajokull volcano continues to spew water vapor, steam, CO2 and ash at a slower rate than last week when it first erupted, with some speculating that there could be impending activity from other Icelandic volcanoes. Some estimates placed the initial eruption's CO2 impact and the subsequent flight cancellations as neutral, essentially canceling each other out; however, I'm sure we will be hearing many different stories in the weeks to come.

  • Image of Iceland Eyjafjallajokull Volcano Eruption via Boston.com

  • Flights to/from and within Europe and the UK are returning to normal
  • Toyota continues to deal with recalls on some of their US built automobiles including the energy efficient Prius, some of which may have been purchased during the recent US cash for clunkers (CFC) program (hmm, is that ironic or what?)
  • Greenpeace, in addition to using a Facebook page to protest Facebook data center practices, is now targeting cloud IT in general, including just before the Apple iPad launch (here are some comments from Microsoft).
  • Vendors in all industries are lining up for the second coming of Green marketing or perhaps Green Washing 2.0

The new Green IT, moving beyond Green wash and hype

Speaking of Green IT including Green Computing, Green Storage, Virtualization, Cloud, Federation and more, here is a link to a post that I did back in February discussing how the Green Gap continues to exist.

The green gap exists and centers around confusion over what green means, along with the common disconnects between core IT issues or barriers to becoming more efficient, effective, flexible and optimized, from both an economic and an environmental basis, and the topics commonly messaged under the green umbrella (read more here).

Regardless of where you stand on Green, Green washing, Green hype, environmentalism, eco-tech and other related themes, for at least a moment, set aside the politics and science debates and think in terms of practicality and economics.

That is, look for simple, recurring things that can be done to stretch your dollar or spending ability in order to support demand (see figure below) in a more effective manner, along with reducing waste. For example, to meet growing demand requirements in the face of shrinking or stagnant budgets, the action is to stretch available resources to do more work when needed, or retain more data where applicable, with the same or a smaller footprint. While the common messaging is around reducing costs, look at the inverse, which is to do more with available budgets or resources. The result is green in terms of both economic and environmental benefits.

IT Resource demand
Increasing IT Resource Demand

Green IT wheel of opportunity
Green IT enablement techniques and technologies

Look at and understand the broader aspects of being green which has both economical and environmental benefits without compromising on productivity or functionality. There are many aspects or facets of being green beyond those commonly discussed or perceived to be so (See Green IT enablement techniques and technologies figure above).

Certainly recycling of paper, water, aluminum, plastics and other items, including technology equipment, is important to reduce waste and is something to consider. Another aspect of reducing waste, particularly in IT, is avoiding rework, which can range from finding network bottlenecks or problems that result in continuous retransmission of data, to failed backup, replication or data transfers that cause lost opportunity or resource consumption. Likewise, programming errors (bugs) or misconfiguration that results in rework or lost productivity is also a form of waste, among others.

Another theme is that of shifting from energy avoidance to energy efficiency and effectiveness, which are often thought to be the same. However, the expanded focus is also about getting more work done when needed with the same or fewer resources (see figure below), for example increasing activity (IOPS, transactions, emails or videos served, bandwidth or messages) per watt of energy consumed.

From energy avoidance to effectiveness
Shifting from energy avoidance to effectiveness

One of the many techniques and approaches for addressing energy use, including stretching resources and being green, is intelligent power management (IPM). With IPM, the focus is not strictly centered on energy avoidance; instead it is about intelligently adapting to different workloads or activity levels, balancing performance and energy. Thus, when there is work to be done, get the work done quickly with as little energy as possible (IOPs or activity per watt); when there is less work, provide lower performance and thus smaller energy requirements; and when there is no work to be done, go into additional energy saving modes. Power management does not have to be exclusively about turning off the lights or IT equipment in order to be green.
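The IPM idea above can be sketched as a simple policy function; a minimal illustration only, where the power states and the queue-depth thresholds are made-up assumptions rather than anything from a real product.

```python
# Illustrative-only sketch of an intelligent power management (IPM) policy:
# pick a power state based on pending work, rather than simply turning
# equipment off. State names and thresholds are hypothetical.

def ipm_state(pending_ops: int) -> str:
    """Choose a power state from the amount of outstanding work."""
    if pending_ops == 0:
        return "deep-sleep"       # no work: maximum energy saving mode
    if pending_ops < 100:
        return "low-power"        # light work: slower but cheaper service
    return "full-performance"     # busy: finish fast, best activity per watt

for load in (0, 25, 5000):
    print(load, "->", ipm_state(load))
```

A real controller would also account for state-transition costs (spin-up time, latency penalties), which is why IPM is about balancing performance and energy rather than pure avoidance.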

The following two figures look at Green IT past, present and future, with an expanding focus on optimization and effectiveness, meaning getting more work done, storing more data for longer periods of time, and meeting growth demands with what appear to be additional resources, yet at a lower per unit cost, without compromising on performance, availability or economics.

Green IT wheel of opportunity
Green IT: Past, present and future shift from avoidance to efficiency and effectiveness

Green IT wheel of opportunity
The new Green IT: Boosting business effectiveness, maximize ROI while helping the environment

If you think about going green as simply doing or using things more effectively, reducing waste, and working more intelligently, the benefits are both economically and environmentally positive (see the two figures above).

Instead of finding ways to fund green initiatives, shift the focus to how you can enable enhanced productivity, stretching resources further and doing more in the same or a smaller footprint (floor space, power, cooling, energy, personnel, licensing, budgets) for business economic sustainability, with environmental enhancement as the result.

Also keep in mind that small percentage changes on a large or recurring basis have significant benefits. For example a small change in cooling temperatures while staying within vendor guideline recommendations can result in big savings for large environments.

 

Bottom line

If you are a business discounting green as simply a fad, or perhaps as a public relations (PR) initiative or activity tied to reducing carbon footprints and recycling, then you are missing out on economic (top and bottom line) enhancement opportunities.

Likewise, if you think that going green is only about the environment, then there is a missed opportunity to boost economic opportunities to help fund those initiatives.

Going green means many different things to various people and is often more broad and common sense based than most realize.

That is all for now, happy Earth Day 2010.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

It's US Census time, What about IT Data Centers?

It is that once-a-decade activity this year referred to as the US 2010 Census.

With the 2010 census underway, not to mention also time for completing and submitting your income tax returns, if you are in IT, what about measuring, assessing, taking inventory or analyzing your data and data center resources?

US 2010 Census forms
Figure 1: IT US 2010 Census forms

Have you recently taken a census of your data, data storage, servers, networks, hardware, software tools, services providers, media, maintenance agreements and licenses not to mention facilities?

Likewise have you figured out what if any taxes in terms of overhead or burden exists in your IT environment or where opportunities to become more optimized and efficient to get an IT resource refund of sorts are possible?

If not, now is a good time to take a census of your IT data center and associated resources, in what might also be called an assessment, review, inventory or survey of what you have, how it's being used, where, by whom and when, along with associated configuration, performance, availability, security and compliance coverage, plus costs and energy impact among other items.

IT Data Center Resources
Figure 2: IT Data Center Metrics for Planning and Forecasts

How much storage capacity do you have, how is it allocated along with being used?

What about storage performance, are you meeting response time and QoS objectives?

Let's not forget about availability, that is, planned and unplanned downtime; how have your systems been behaving?

From an energy or power and cooling standpoint, what is the consumption, along with metrics aligned to productivity and effectiveness? These include IOPS per watt, transactions per watt, videos or emails along with web clicks or page views served per watt, processor GHz per watt, data movement bandwidth per watt and capacity stored per watt in a given footprint.
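Each of these productivity-per-watt metrics is simply useful work divided by power draw; a quick sketch, where the workload counts and wattage below are made-up sample figures rather than measurements:

```python
# Sketch: computing activity-per-watt metrics. Each metric is the amount
# of useful work delivered divided by the watts consumed doing it.

def per_watt(activity: float, watts: float) -> float:
    """Return units of activity delivered per watt of power draw."""
    return activity / watts

# Hypothetical sample figures for one storage system
iops = 250_000           # I/O operations per second
bandwidth_mb_s = 3_200   # MB/sec of data moved
capacity_tb = 480        # TB stored in the footprint
watts = 1_600            # average power draw, including cooling share

print(f"IOPS per watt:      {per_watt(iops, watts):.2f}")
print(f"MB/sec per watt:    {per_watt(bandwidth_mb_s, watts):.2f}")
print(f"TB stored per watt: {per_watt(capacity_tb, watts):.3f}")
```

The same ratio works for any activity indicator (transactions, emails, page views), which makes it easy to compare systems of different sizes on an effectiveness rather than avoidance basis.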

Other items to look into for data centers, besides storage, include servers, data and I/O networks, hardware, software, tools, services and other supplies, along with the physical facility, with metrics such as PUE. Speaking of optimization, how is your environment doing? That is another advantage of doing a data center census.

For those who have completed and sent in your census material along with your 2009 tax returns, congratulations!

For others in the US who have not done so, now would be a good time to get going on those activities.

Likewise, regardless of what country or region you are in, it's always a good time to take a census or inventory of your IT resources instead of waiting every ten years to do so.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Inaugural StorageIO Newsletter

Welcome to the winter 2010 edition of the Server and StorageIO (StorageIO) newsletter. This inaugural edition of the StorageIO newsletter coincides with our 5th year in business along with recent web site and blog enhancements.

In an age of social media including Facebook, Twitter, blogs and video, some might ask why a newsletter; after all, is that not old school or non social media?

For those who are immersed in Twitter, blogs, Facebook, feeds and other Web 2.0 means of communication, a traditional newsletter might not be in vogue.

StorageIO News Letter Image
Winter 2010 Newsletter
(Inaugural Edition)

However, there is still a large percentage of the population, which also means a vast number of visitors and guests of StorageIO web sites and blogs, or viewers of articles and other content, who do not use Twitter, Facebook, LinkedIn or RSS feeds. Thus I realize that there is still a role for a newsletter.

Thus, it makes sense to bring info to those of you who prefer a traditional newsletter format via email or other subscription; the newsletter is available in both HTML and PDF formats.

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the inaugural newsletter as HTML or PDF, or to go to the newsletter page.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com.

Enjoy this inaugural edition of the StorageIO newsletter, let me know your comments and feedback.

Also, a very big thank you to everyone who has helped make StorageIO a success!

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Green IT, Green Gap, Tiered Energy and Green Myths

There are many different aspects of Green IT along with several myths or misperceptions not to mention missed opportunities.

There is a Green Gap or disconnect between environmentally aware, focused messaging and core IT data center issues. For example, when I ask IT professionals whether they have or are under direction to implement green IT initiatives, the number averages in the 10-15% range.

However, when I ask the same audiences who has or sees power, cooling, floor space, supporting growth, or addressing environmental health and safety (EHS) related issues, the average is 75 to 90%. What this means is that there is a disconnect between what is perceived as being green and the opportunities for IT organizations to make improvements from an economic and efficiency standpoint, including boosting productivity.

 

Some IT Data Center Green Myths
Is “green IT” a convenient or inconvenient truth or a legend?

When it comes to green and virtual environments, there are plenty of myths and realities, some of which vary depending on market or industry focus, price band, and other factors.

For example, there are lines of thinking that only ultra large data centers are subject to PCFE-related issues, or that all data centers need to be built along the Columbia River basin in Washington State, or that virtualization eliminates vendor lock-in, or that hardware is more expensive to power and cool than it is to buy.

The following are some myths and realities as of today, some of which may be subject to change from reality to myth or from myth to reality as time progresses.

Myth: Green and PCFE issues are applicable only to large environments.

Reality: I commonly hear that green IT applies only to the largest of companies. The reality is that PCFE issues or green topics are relevant to environments of all sizes, from the largest of enterprises to the small/medium business, to the remote office branch office, to the small office/home office or “virtual office,” all the way to the digital home and consumer.

 

Myth: All computer storage is the same, and powering disks off solves PCFE issues.

Reality: There are many different types of computer storage, with various performance, capacity, power consumption, and cost attributes. Although some storage can be powered off, other storage that is needed for online access does not lend itself to being powered off and on. For storage that needs to be always online and accessible, energy efficiency is achieved by doing more with less—that is, boosting performance and storing more data in a smaller footprint using less power.

 

Myth: Servers are the main consumer of electrical power in IT data centers.

Reality: In the typical IT data center, on average, 50% of electrical power is consumed by cooling, with the balance used for servers, storage, networking, and other aspects. However, in many environments, particularly processing or computation intensive environments, servers in total (including power for cooling and to power the equipment) can be a major power draw.

 

Myth: IT data centers produce 2 to 8% of all global Carbon Dioxide (CO2) and carbon emissions.

Reality: This might perhaps be true, given some creative accounting and marketing math used to help build a justification case or to scare you into doing something. However, the reality is that in the United States, for example, IT data centers consume around 2 to 4% of electrical power (depending on when you read this), and less than 80% of all U.S. CO2 emissions are from electrical power generation, so the math does not quite add up. The reality is this: if no action is taken to improve IT data center energy efficiency, continued demand growth will shift IT power-related emissions from myth to reality, not to mention constrain IT and business sustainability from an economic and productivity standpoint.
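To see why the math does not add up, multiply the two shares together; a back-of-the-envelope check, taking the percentages quoted above as given:

```python
# If data centers draw 2-4% of US electricity, and electrical generation
# accounts for at most 80% of US CO2 emissions (both figures from the
# text above, treated as assumptions), then the data centers' share of
# total CO2 emissions is at most the product of the two shares.
dc_power_share_low, dc_power_share_high = 0.02, 0.04
generation_emission_share = 0.80  # upper bound from the text

low = dc_power_share_low * generation_emission_share
high = dc_power_share_high * generation_emission_share
print(f"Implied data center CO2 share: {low:.1%} to {high:.1%}")
# 1.6% to 3.2% of total emissions -- well short of the 8% upper end
# sometimes claimed for IT data centers.
```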

Myth: Server consolidation with virtualization is a silver bullet to address PCFE issues.

Reality: Server virtualization for consolidation is only part of an overall solution that should be combined with other techniques, including lower power, faster and more energy efficient servers, and improved data and storage management techniques.

 

Myth: Hardware costs more to power than to purchase.

Reality: Currently, for some low-cost servers, standalone disk storage, or entry level networking switches and desktops, this may be true, particularly where energy costs are excessively high and the devices are kept and used continually for three to five years. A general rule of thumb is that the actual cost of most IT hardware will be a fraction of the price of the associated management and software tool costs plus facilities and cooling costs. For the most part, at least as of this writing, it is mainly small standalone individual hard disk drives or small entry level volume servers, bought and then used over a three to five year time frame in locations that have very high electrical costs, where powering the device can cost more than buying it.

 

Regarding this last myth, for the more commonly deployed external storage systems across all price bands and categories, generally speaking, except for extremely inefficient and hot running legacy equipment, the reality is that it is still cheaper to power the equipment than to buy it. Having said that, there are some qualifiers that should be used as key indicators to keep the equation balanced. These qualifiers include the acquisition cost, if any, for new, expanded, or remodeled habitats or space to house the equipment, the price of energy in a given region, including surcharges, as well as cooling, the length of time, and how continuously the device will be used.

For larger businesses, IT equipment in general still costs more to purchase than to power, particularly with newer, more energy efficient devices. However, given rising energy prices, or the need to build new facilities, this could change moving forward, particularly if a move toward energy efficiency is not undertaken.

There are many variables when purchasing hardware, including acquisition cost, the energy efficiency of the device, power and cooling costs for a given location and habitat, and facilities costs. For example, if a new storage solution is purchased for $100,000, yet new habitat or facilities must be built for three to five times the cost of the equipment, those costs must be figured into the purchase cost.

Likewise, if the price of a storage solution decreases dramatically, but the device consumes a lot of electrical power and needs a large cooling capacity while operating in a region with expensive electricity costs, that, too, will change the equation and the potential reality of the myth.
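A back-of-the-envelope comparison makes the buy-versus-power trade-off concrete; the purchase price, wattage and electricity rate below are hypothetical illustration values, not vendor figures:

```python
# Sketch: does a device cost more to power than to purchase?
# All inputs are hypothetical examples.

def energy_cost(watts: float, years: float, usd_per_kwh: float) -> float:
    """Electricity cost of running a device continuously for `years`."""
    hours = years * 365 * 24
    return (watts / 1000.0) * hours * usd_per_kwh

purchase_price = 400.0   # e.g. a low-cost entry level volume server
watts = 150.0            # average draw, including its share of cooling
cost = energy_cost(watts, years=5, usd_per_kwh=0.25)  # high-cost region

print(f"5-year energy cost: ${cost:,.0f} vs purchase price ${purchase_price:,.0f}")
# For cheap gear in an expensive-power region the energy bill can exceed
# the purchase price; for a $100,000 storage system it rarely does.
```

Rerunning the same arithmetic with a lower electricity rate, a shorter service life, or a higher purchase price flips the conclusion, which is exactly the point about qualifiers keeping the equation balanced.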

 

Tiered Energy Sources
Given that IT resources and facilities require energy to power equipment as well as keep it cool, electricity is a popular topic associated with Green IT, economics and efficiency, with lots of metrics and numbers tossed around. With that in mind, the U.S. national average CO2 emission is 1.34 lb/kWh of electrical power. Granted, this number will vary depending on the region of the country and the source of fuel for the power-generating station or power plant.
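As a quick worked example using the 1.34 lb/kWh national average above (actual factors vary by region and fuel source, and the 500 W device is a hypothetical example):

```python
# Estimate annual CO2 from a device's electrical draw, using the
# 1.34 lb/kWh US national average emission factor cited above.
CO2_LB_PER_KWH = 1.34

def annual_co2_lb(avg_watts: float) -> float:
    """Pounds of CO2 per year for a device running continuously."""
    kwh_per_year = (avg_watts / 1000.0) * 365 * 24
    return kwh_per_year * CO2_LB_PER_KWH

# A hypothetical server drawing 500 W on average, including cooling share:
print(f"{annual_co2_lb(500):,.0f} lb CO2 per year")
```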

Like tiered IT resources (servers, storage, I/O networks, virtual machines and facilities), of which there are various tiers or types of technologies to meet various needs, there are also multiple types of energy sources. Different tiers of energy sources vary in their cost, availability and environmental characteristics among others. For example, in the US there are different types of coal, and not all coal is as dirty, when combined with emissions air scrubbers, as you might be led to believe; however, there are other energy sources to consider as well.

Coal continues to be a dominant fuel source for electrical power generation both in the United States and abroad, with other fuel sources, including oil, gas, natural gas, liquid propane gas (LPG or propane), nuclear, hydro, thermo or steam, wind and solar. Within a category of fuel, for example, coal, there are different emissions per ton of fuel burned. Eastern U.S. coal is higher in CO2 emissions per kilowatt hour than western U.S. lignite coal. However, eastern coal has more British thermal units (Btu) of energy per ton of coal, enabling less coal to be burned in smaller physical power plants.

If you have ever noticed that coal power plants in the United States seem to be smaller in the eastern states than in the Midwest and western states, it’s not an optical illusion. Because eastern coal burns hotter, producing more Btu, smaller boilers and stockpiles of coal are needed, making for smaller power plant footprints. On the other hand, as you move into the Midwest and western states of the United States, coal power plants are physically larger, because more coal is needed to generate 1 kWh, resulting in bigger boilers and vent stacks along with larger coal stockpiles.

On average, a gallon of gasoline produces about 20 lb of CO2, depending on usage and efficiency of the engine as well as the nature of the fuel in terms of octane or amount of Btu. Aviation fuel and diesel fuel differ from gasoline, as does natural gas or various types of coal commonly used in the generation of electricity. For example, natural gas is less expensive than LPG but also provides fewer Btu per gallon or pound of fuel. This means that more natural gas is needed as a fuel to generate a given amount of power.

Recently, while researching small, 10 to 12 kW standby generators for my office, I learned about some of the differences between propane and natural gas. What I found was that with natural gas as the fuel, a given generator produced about 10.5 kW, whereas the same unit attached to an LPG or propane fuel source produced 12 kW. The trade off was that to get as much power as possible out of the generator, the higher cost LPG was the better choice. To use lower cost fuel but get less power out of the device, the choice would be natural gas. If more power was needed, then a larger generator could be deployed using natural gas, with the trade off of requiring a larger physical footprint.
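That generator trade-off can be quantified directly; the two output ratings are the figures quoted above, while any cost comparison would additionally need local fuel prices per Btu, which are not given here:

```python
# The generator fuel trade-off above, quantified. Output ratings are the
# figures from the text; fuel prices would be needed for a cost ranking.
natural_gas_output = 10.5   # output with natural gas as the fuel
propane_output = 12.0       # output from the same unit on LPG/propane

extra = (propane_output - natural_gas_output) / natural_gas_output
print(f"Propane yields {extra:.0%} more power from the same generator")
# Whether that beats natural gas economically depends on the local
# price per Btu of each fuel, which varies by region.
```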

Oil and gas are not used as much as fuel sources for electrical power generation in the United States as in other countries such as the United Kingdom. Gasoline, diesel, and other petroleum-based fuels are used for some power plants in the United States, including standby or peaking plants. In the electrical power generation and transmission (G&T) industry, as in IT where different tiers of servers and storage are used for different applications, there are different tiers of power plants using different fuels with various costs. Peaking and standby plants are brought online when there is heavy demand for electrical power, during disruptions when a lower-cost or more environmentally friendly plant goes offline for planned maintenance, or in the event of a trip or unplanned outage.

CO2 is commonly discussed with respect to green issues and associated emissions; however, there are other so-called greenhouse gases (GHG), including nitrous oxide (N2O) and water vapor, among others. Carbon makes up only a fraction of CO2. To be specific, only about 27% of a pound of CO2 is carbon; the balance is oxygen. Consequently, carbon emissions tax schemes, as opposed to CO2 tax schemes, need to account for the amount of carbon per ton of CO2 being put into the atmosphere. In some parts of the world, including the EU and the UK, emissions trading schemes (ETS) are either already in place or in initial pilot phases to provide incentives to improve energy efficiency and use.
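The roughly 27% figure falls straight out of atomic weights, as this quick sketch shows (standard atomic weights only, nothing scheme-specific):

```python
# Carbon mass fraction of CO2 from atomic weights:
# one carbon atom (12.011) plus two oxygen atoms (15.999 each).
C = 12.011
O = 15.999
co2 = C + 2 * O                      # about 44.009 g/mol
carbon_fraction = C / co2            # about 0.273, i.e. roughly 27% carbon by mass

def carbon_tons(co2_tons: float) -> float:
    """Tons of carbon contained in a given tonnage of CO2 emitted."""
    return co2_tons * carbon_fraction

print(round(carbon_fraction * 100, 1))  # about 27.3 (%)
```

So a levy assessed per ton of carbon, rather than per ton of CO2, applies to a bit more than a quarter of each ton of CO2 emitted.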

Meanwhile, in the United States there are voluntary programs for buying carbon offset credits, along with initiatives such as the Carbon Disclosure Project (www.cdproject.net), a not-for-profit organization that facilitates the flow of information pertaining to emissions by organizations so that investors can make informed decisions and business assessments from an economic and environmental perspective. Another voluntary program is the U.S. EPA Climate Leaders initiative, in which organizations commit to reduce their GHG emissions to a given level over a specific period of time.

Regardless of your stance or perception on green issues, the reality is that for business and IT sustainability, a focus on ecological and, in particular, the corresponding economic aspects cannot be ignored. There are business benefits to aligning the most energy efficient and low power IT solutions combined with best practices to meet different data and application requirements in an economic and ecologically friendly manner.

Green initiatives need to be seen in a different light, as business enablers as opposed to ecological cost centers. For example, many local utilities and state energy or environmentally concerned organizations are providing funding, grants, loans, or other incentives to improve energy efficiency. Some of these programs can help offset the costs of doing business and going green. Instead of being seen as the cost to go green, by addressing efficiency, the by-products are economic as well as ecological.

Put a different way, a company can spend carbon credits to offset its environmental impact, similar to paying a fine for noncompliance, or it can achieve efficiency and obtain incentives. There are many solutions and approaches to address these different issues, which will be looked at in the coming chapters.

What does this all mean?
There are real things that can be done today that can be effective toward achieving a balance of performance, availability, capacity, and energy effectiveness to meet particular application and service needs.

Sustaining for economic and ecological purposes can be achieved by balancing performance, availability, capacity, and energy to applicable application service level and physical floor space constraints along with intelligent power management. Energy economics should be considered as much a strategic resource part of IT data centers as are servers, storage, networks, software, and personnel.

The bottom line is that without electrical power, IT data centers come to a halt. Rising fuel prices, strained generating and transmission facilities for electrical power, and a growing awareness of environmental issues are forcing businesses to look at PCFE issues. To support and sustain business growth, including storing and processing more data, IT data centers need to leverage energy efficiency as a means of addressing PCFE issues. By adopting effective solutions, economic value can be achieved with positive ecological results while sustaining business growth.

Some additional links include:

Want to learn or read more?

Check out Chapter 1 (Green IT and the Green Gap, Real or Virtual?) in my book “The Green and Virtual Data Center” (CRC) here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Does IBM Power7 processor announcement signal storage upgrades?

IBM recently announced the Power7, the latest generation of processors that the company uses in some of its midrange and high-end compute servers, including the iSeries and pSeries.


IBM Power7 processor wafers (chips)

 

What is the Power7 processor?
The Power7 is the latest generation of IBM processors (chips) used as the CPUs in IBM midrange and high-end open systems (pSeries) for Unix (AIX) and Linux, as well as in the iSeries (aka the AS400 successor). Building on previous Power series processors, the Power7 increases the performance per core (CPU) along with the number of cores per socket (chip) footprint. For example, each Power7 chip that plugs into a socket on a processor card in a server can have up to 8 cores or CPUs. Note that cores are sometimes also known as micro CPUs, not to be confused with the virtual CPUs (vCPUs) presented via hypervisor abstraction.

Sometimes you may also hear the term or phrase 2-way or 4-way (not to be confused with a Cincinnati-style 4-way chili) or 8-way, among others, which refers to the number of cores on a chip. Hence, a dual 2-way would be a pair of processor chips each with 2 cores, while a quad 8-way would be 4 processor chips each with 8 cores, and so on.
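The N-way arithmetic above can be sketched as follows; the 4-threads-per-core default reflects Power7's simultaneous multithreading maximum mentioned later in this post:

```python
# Sketch of the "N-way" terminology: total cores = sockets x cores per chip,
# and (for Power7-class chips) hardware threads = cores x SMT threads per core.
def totals(sockets: int, cores_per_chip: int, threads_per_core: int = 4):
    """Return (total cores, total hardware threads) for a given configuration."""
    cores = sockets * cores_per_chip
    return cores, cores * threads_per_core

# A dual 2-way: two chips, each with 2 cores.
print(totals(2, 2))   # (4, 16)
# A quad 8-way: four chips, each with 8 cores.
print(totals(4, 8))   # (32, 128)
```

Note the quad 8-way case lines up with the 32 cores and 128 threads quoted for the Power 750 Express below.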


IBM Power7 with up to eight cores per processor (chip)

In addition to faster and more cores in a denser footprint, there are also energy efficiency enhancements, including Energy Star for enterprise servers qualification along with an intelligent power management (IPM, also see here) implementation. IPM is implemented in what IBM refers to as Intelligent Energy technology for turning on or off various parts of the system along with varying processor clock speeds. The benefit is that when there is work to be done, it gets done quickly; when there is less work, some cores can be turned off or clock speeds slowed down. This is similar to what other industry leaders, including Intel, have deployed with their Nehalem series of processors, which also support IPM.
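A rough sketch of why varying voltage and clock speed saves energy: dynamic power scales approximately with capacitance times voltage squared times frequency. The voltage and frequency values below are purely illustrative, not from any IBM or Intel datasheet:

```python
# Simplified dynamic-power model behind voltage/frequency scaling:
# P ~ C * V^2 * f (switched capacitance, voltage squared, clock frequency).
# All values here are illustrative, not measured figures for any product.
def dynamic_power(c: float, volts: float, freq_ghz: float) -> float:
    return c * volts**2 * freq_ghz

full = dynamic_power(1.0, 1.2, 3.5)    # busy: full voltage and clock
idle = dynamic_power(1.0, 0.9, 1.8)    # light load: voltage and clock scaled down
print(round(idle / full, 2))           # scaled mode draws ~29% of full power
```

Because voltage enters squared, even a modest voltage drop alongside a slower clock yields an outsized power saving, which is the payoff behind Intelligent Energy and similar IPM schemes.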

Additional features of the Power7 include (varies by system solutions):

  • Energy Star for servers qualification, providing enhanced performance and efficiency.
  • IBM Systems Director Express, Standard and Enterprise Editions for simplified management including virtualization capabilities across pools of Power servers as a single entity.
  • PowerVM (Hypervisor) virtualization for AIX, iSeries and Linux operating systems.
  • ActiveMemory enables effective memory capacity to be larger than physical memory, similar to how virtual memory works within many operating systems. The benefit is to enable a partition to have access to more memory which is important for virtual machines along with the ability to support more partitions in a given physical memory footprint.
  • TurboCore and Intelligent Threads enable workload optimization by selecting the applicable mode for the work to be done, for example, a single thread per core or simultaneous multithreading (2 or 4 threads) modes per core. The trade-off is to have more threads per core for concurrent processing, or fewer threads to boost single-stream performance.
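As a hedged sketch of the ActiveMemory idea described above (the 2:1 expansion factor below is illustrative, not an IBM guarantee; actual expansion depends on how compressible the partition's data is):

```python
# Illustrative sketch of memory expansion: compressing memory pages lets a
# partition see more "effective" memory than the physical RAM behind it.
def effective_memory_gb(physical_gb: float, expansion_factor: float) -> float:
    """Effective memory seen by partitions for a given expansion factor."""
    return physical_gb * expansion_factor

# With a 2:1 factor, 512GB of physical RAM presents as 1TB effective.
print(effective_memory_gb(512, 2.0))  # 1024.0
```

That 512GB physical to 1TB effective pairing matches the Power 750 Express figures quoted below.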

IBM has announced several Power7 enabled or based server system models with various numbers of processors and cores along with standalone and clustered configurations including:

IBM Power7 family of server systems

  • Power 750 Express, a 4U server with one to four sockets supporting up to 32 cores (3.0 to 3.5 GHz) and 128 threads (4 threads per core), PowerVM (hypervisor), along with 512GB of main memory capacity, or an effective 1TB using Active Memory Expansion.
  • Power 755, 32 3.3GHz Power7 cores (8 cores per processor) with memory up to 256GB, along with AltiVec and VSX SIMD instruction set support. Up to 64 755 nodes, each with 32 cores, can be clustered together for high performance applications.
  • Power 770, up to 64 Power7 cores providing more performance while consuming less energy per core compared to the previous Power6 generation. Support for up to 2TB of main memory (RAM) using 32GB DIMMs when available later in 2010.
  • Power 780, 64 Power7 cores with TurboCore workload optimization providing a performance boost per core. With TurboCore, 64 cores can operate at 3.8 GHz, or up to 32 cores at 4.1 GHz with twice the amount of cache when more speed per thread is needed. Support for up to 2TB of main memory (RAM) using 32GB DIMMs when available later in 2010.

Additional Power7 specifications and details can be found here.

 

What is the DS8000?
The DS8000 is the latest generation of a family of high-end enterprise-class storage systems supporting IBM mainframe (zSeries), open systems, and mixed workloads. At the high end for open systems and mainframe, the DS8000 competes with similar systems from EMC (Symmetrix/DMX/VMAX), Fujitsu (Eternus DX8000), HDS (Hitachi), and HP (XP series, OEMed from Hitachi). Previous generations (aka predecessors) of the DS8000 include the ESS (Enterprise Storage System) Model 2105 (aka Shark) and the VSS (Versatile Storage Server). Current generation family members include the Power5-based DS8100 and DS8300, along with the Power6-based DS8700.

IBM DS8000 Storage System

Learn more about the DS8000 here, here, here and here.

 

What is the association between the Power7 and DS8000?
Disclosure: Before I go any further, let's be clear on something: what I am about to post is based entirely on researching, analyzing, and correlating (connecting the dots) what is publicly and freely available from IBM on the Web (e.g., there is no NDA material being disclosed here that I am aware of), along with prior trends and tendencies of IBM and their solutions. In other words, you can call it speculation, a prediction, an industry analysis perspective, looking into the proverbial crystal ball, or an educated guess, and thus it should not be taken as an indicator of what IBM may actually do or be working on. As to what may actually be done or not done, for that you will need to contact one of the IBM truth squad members.

As to what is the linkage between Power7 and the DS8000?

The linkage between the Power7 and the DS8000 is just that, the Power processors!

At the heart of the DS8000 are Power series processors coupled or clustered together in pairs for performance and availability that run IBM developed storage systems software. While the spin doctors may not agree, essentially the DS8000 and its predecessors are based on and around Power series processors clustered together with a high speed interconnect that combine to host an operating system and IBM developed storage system application software.

Thus, for over a decade, IBM has been able to leverage technology improvement curve advantages, with faster processors, increased memory, and I/O connectivity in denser footprints, while enhancing their storage system application software.

Given that the current DS8000 family members utilize 2-way (2 core) or 4-way (4 core) Power5 and Power6 processors, similar to how their predecessors utilized previous generation Power4, Power3, and earlier processors, it only makes sense that IBM might use a Power7 processor in a future DS8000 (or a derivative, perhaps even with a different name or model number). Again, this is based entirely on historical trends and the pattern of the IBM storage systems group leveraging the latest generation of Power processors; after all, they are a large customer of the Power systems group.

Consequently, it would make sense for the IBM storage folks to leverage the new Power7 processors and features, similar to how EMC is leveraging Intel processor enhancements, along with what other vendors are doing.

There is certainly room in the DS8000 architecture for growth in terms of supporting additional nodes or complexes or controllers (or whatever your term preference of choice is for describing a server) each equipped with multiple processors (chips or sockets) that have multiple cores. While IBM has only commercially released two complex or dual server versions of the DS8000 with various numbers of cores per server, they have come nowhere close to their architecture limit of nodes. In fact with this release of Power7, as an example, the model 755 can be clustered via InfiniBand with up to 64 nodes, with each node having 4 sockets (e.g. 4 way) with up to 8 cores each. That means on paper, 64 x 4 x 8 = 2048 cores and each core could have up to 4 threads for concurrency, or half as many cores for more cache performance. Now will IBM ever come out with a 64 node DS8000 on steroids?
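The on-paper ceiling sketched above works out as follows:

```python
# The "on paper" ceiling described above for a 755-style cluster:
# 64 InfiniBand-clustered nodes, 4 sockets per node, 8 cores per socket.
nodes, sockets_per_node, cores_per_socket, threads_per_core = 64, 4, 8, 4

cores = nodes * sockets_per_node * cores_per_socket
print(cores)                     # 2048 cores
print(cores * threads_per_core)  # 8192 hardware threads at 4 threads per core
```

Again, that is an architectural paper exercise, not a configuration IBM has shipped or announced.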

Tough to say; maybe some day, to play specmanship versus the EMC VMAX 256-node architectural limit, however I'm not holding my breath just yet. Thus, with more and faster cores per processor, the ability to increase the number of processors per server or node, along with architectural capabilities to boost the number of nodes in an instance or cluster, on paper alone there is lots of headroom for the DS8000 or a future derivative.

What about software and functionality? Sure, IBM could in theory simply turn the crank and use a new hardware platform that is faster, has more capacity, is denser, and has better energy efficiency; however, what about new features?

Can IBM enhance its storage systems application software that it evolved from the ESS with new features to leverage underlying hardware capabilities including TurboCore, PowerVM, device and I/O sharing, Intelligent Energy efficiency along with threads enhancements?

Can IBM leverage those and other features to support not only scaling of performance, availability, capacity, and energy efficiency in an economical manner, but also add features for advanced automated tiering or data movement, plus other popular industry buzzword functionality?

 

Additional thoughts and perspectives
One of the things I find interesting is that some IBM folks, along with their channel partners, will go to great lengths to explain why and how the DS8000 is not just a pair of Power-based servers tightly coupled together. Yet, on the other hand, some of those folks will go to great lengths touting the advantages of leveraging off-the-shelf or commercially available servers based on Intel or AMD systems, such as IBM's own XIV storage solution.

I can understand in the past when the likes of EMC, Hitachi, and Fujitsu were all competing with IBM building bigger and more function-rich monolithic systems; however, that trend is shifting. The trend now, as is being seen with EMC and VMAX, is to decouple and leverage more off-the-shelf commercially available technology combined with custom ASICs where and when needed.

Thus at a time where more attention and discussion is around clustered, grid, scalable storage systems, will we see or hear the IBM folks change their tune about the architectural scale up and out capabilities of the Power enabled DS8000 family?

There had been some industry speculation that the DS8000 would be the end of the line if the Power7 had not been released; that speculation will now (assuming IBM leverages the Power7 for storage) shift to whether there will be a Power8, Power9, and so forth.

From a storage perspective, is the DS8K still relevant?

I say yes given its installed base and need for IBM to have an enterprise solution (sorry, IMHO XIV does not fit that bill just yet) of their own, lest they cut an OEM deal with the likes of Hitachi or Fujitsu which while possible, I do not see it as likely near term. Another soft point on its relevance is to gauge reaction from their competitors including EMC and HDS.

From a server perspective, what is the benefit of the new Power7 enabled servers from IBM?

Simple: increased scale of performance for single-threaded as well as concurrent or parallel application workloads.

In other words, supporting more web sites, partitions for virtual machines and guest operating system instances, databases, compute and other applications that demand performance and economy of scale.

This also means that IBM has a platform to aggressively go after Sun Solaris server customers with a lifeline during the Oracle transition, not to mention being a platform for running Oracle in addition to its own UDB/DB2 database. In addition to being a platform for Unix (AIX) as well as Linux, the Power7 series is also at the heart of the current generation iSeries (the server formerly known as the AS400).

Additional links and resources:

Closing comments (for now):
Given IBM's history of following a Power chip enhancement with a new upgraded version of the DS8000 (or its ESS/2105, aka Shark/VSS, predecessors) within a reasonable amount of time, I would be surprised if we do not see a new DS8000 (perhaps even renamed or renumbered) within the year.

This is similar to how other vendors leverage new processor chip technology evolution to pace their systems upgrades, for example, how many vendors who leverage Intel processors have made announcements over the past year since the Nehalem series rolled out, including EMC among others.

Lets see what the IBM truth squads have to say, or, not have to say :)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


California Center for Sustainable Energy (CCSE)



CCSE Facility and Seminar Series

This past week I had the honor of delivering a keynote presentation in San Diego at the California Center for Sustainable Energy (CCSE) as part of their continuing education and community outreach workshop and seminar series. The theme of the well-attended event was Next Generation Data Center Solutions, and my talk centered around leveraging green and virtual data centers for enabling efficiency and effectiveness. In addition to my keynote, the event included a panel discussion that I moderated with representatives of the event's sponsor Compucom, along with their special guests APC, HP, Intel, and VMware.

The CCSE focus areas include climate change, energy efficiency, green buildings, renewable energy, transportation, home, and business. Their services include awareness and outreach, education programs, a library and tools, and consulting and associated services. Speaking of their library, there is even a signed copy of my book The Green and Virtual Data Center (CRC) now at the CCSE library that can be checked out along with their other resources.

The CCSE staff and facilities were fantastic with hosts Mike Bigelow (an energy engineer) and Marlene King (program manager) orchestrating a great event.

If you are in the San Diego area, check out the CCSE, located at 8690 Balboa Ave., Suite 100. They have a great library and cool demonstrations and tools that you can check out to assist with optimizing IT data centers from an energy efficiency standpoint. Learn more about the CCSE here.

Following are some relevant links to the keynote along with panel discussion from the CCSE event:

Follow these links to view additional videos or podcasts, tips, articles, books, reports and events.

Cheers
gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio


EPA Server and Storage Workshop Feb 2, 2010

EPA Energy Star

Following up on a recent previous post pertaining to US EPA Energy Star(r) for Servers, Data Center Storage and Data Centers, there will be a workshop held Tuesday February 2, 2010 in San Jose, CA.

Here is the note (Italics added by me for clarity) from the folks at EPA with information about the event and how to participate.

 

Dear ENERGY STAR® Servers and Storage Stakeholders:

Representatives from the US EPA will be in attendance at The Green Grid Technical Forum in San Jose, CA in early February, and will be hosting information sessions to provide updates on recent ENERGY STAR servers and storage specification development activities.  Given the timing of this event with respect to ongoing data collection and comment periods for both product categories, EPA intends for these meetings to be informal and informational in nature.  EPA will share details of recent progress, identify key issues that require further stakeholder input, discuss timelines for the completion, and answer questions from the stakeholder community for each specification.

The sessions will take place on February 2, 2010, from 10:00 AM to 4:00 PM PT, at the San Jose Marriott.  A conference line and Webinar will be available for participants who cannot attend the meeting in person.  The preliminary agenda is as follows:

Servers (10:00 AM to 12:30 PM)

  • Draft 1 Version 2.0 specification development overview & progress report
    • Tier 1 Rollover Criteria
    • Power & Performance Data Sheet
    • SPEC efficiency rating tool development
  • Opportunities for energy performance data disclosure

 

Storage (1:30 PM to 4:00 PM)

  • Draft 1 Version 1.0 specification development overview & progress report
  • Preliminary stakeholder feedback & lessons learned from data collection 

A more detailed agenda will be distributed in the coming weeks.  Please RSVP to storage@energystar.gov or servers@energystar.gov no later than Friday, January 22.  Indicate in your response whether you will be participating in person or via Webinar, and which of the two sessions you plan to attend.

Thank you for your continued support of ENERGY STAR.

 

End of EPA Transmission

For those attending the event, I look forward to seeing you there in person on Tuesday before flying down to San Diego where I will be presenting on Wednesday the 3rd at The Green Data Center Conference.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


EPA Energy Star for Data Center Storage Update

Following up on previous posts pertaining to US EPA Energy Star for Servers, Data Center Storage and Data Centers, here is a note received today with some new information. For those interested in the evolving Energy Star for Data Center, Servers and Storage, have a look at the following as well as the associated links.

Here is the note from EPA:

From: ENERGY STAR Storage [storage@energystar.gov]
Sent: Monday, December 28, 2009 8:00 AM
Subject: ENERGY STAR Data Center Storage Initial Data Collection Procedure

EPA Energy Star

Dear ENERGY STAR Data Center Storage Stakeholder or Other Interested Party:

The U.S. Environmental Protection Agency (EPA) would like to invite interested parties to test the energy performance of storage products that are currently being considered for inclusion in the Version 1.0 ENERGY STAR® Data Center Storage specification. Please review the attached cover letter, data collection procedure, and test data collection sheet for further information.

Stakeholders are encouraged to submit test data via e-mail to storage@energystar.gov no later than Friday, February 12, 2010.

Thank you for your continued support of ENERGY STAR!

Attachment Links:

Storage Initial Data Collection Procedure.pdf

Storage Initial Data Collection Cover Letter.pdf

Storage Initial Data Collection Data Sheet.xls

For more information, visit: www.energystar.gov

 

For those interested in EPA Energy Star, Green IT including Green and energy efficient storage, check out these following links:

Watch for more news and updates pertaining to EPA Energy Star for Servers, Data Center Storage and Data centers in 2010.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Is MAID Storage Dead? I Dont Think So!

While some vendors are doing better than others, and first generation MAID (Massive or monolithic Array of Idle Disks) might be dead or about to be deceased, spun down, or put into a long-term sleep mode, it is safe to say that second generation MAID (e.g. MAID 2.0), also known as intelligent power management (IPM), is alive and doing well.

In fact, IPM is not unique to disk storage or disk drives as it is also a technique found in current generation of processors such as those from Intel (e.g. Nehalem) and others.

Other names for IPM include adaptive voltage scaling (AVS), adaptive voltage scaling optimized (AVSO) and adaptive power management (APM) among others.

The basic concept is to vary the amount of power being used to the amount of work and service level needed at a point in time and on a granular basis.

For example, first generation MAID or drive spin-down as deployed by vendors such as Copan, which is rumored to be in the process of being spun down as a company (see this blog post by a former Copan employee), was binary. That is, a disk drive was either on or off, and the granularity was the entire storage system. In the case of Copan, the granularity was that a maximum of 25% of the disks could ever be spun up at any point in time. As a point of reference, when I ask IT customers why they don't use MAID or IPM enabled technology, they commonly cite concerns about performance, or more importantly, the perception of bad performance.

CPU chips have been taking the lead with the ability to vary voltage and clock speed, enabling or disabling electronic circuitry to align with the amount of work needing to be done at a point in time. This more granular approach allows the CPU to run at faster rates when needed and at slower rates when possible to conserve energy (here, here and here).

A common example is a laptop with technology such as speed step, or battery stretch saving modes. Disk drives have been following this approach by being able to vary their power usage by adjusting to different spin speeds along with enabling or disabling electronic circuitry.

On a granular basis, second generation MAID with IPM enabled technology can operate on a LUN or volume group basis across different RAID levels and types of disk drives, depending on the specific vendor implementation. Some examples of vendors implementing various forms of IPM for second generation MAID include Adaptec, EMC, Fujitsu (Eternus), HDS (AMS), HGST (disk drives), Nexsan, and Xyratex, among many others.
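To make the granularity difference concrete, here is an illustrative sketch (not any vendor's actual API or power figures) of per-LUN power states, as opposed to the first generation all-or-nothing approach:

```python
# Illustrative sketch (hypothetical states and power fractions, not any
# vendor's actual implementation) of second generation MAID/IPM: power is
# managed per LUN or volume group rather than binary on/off for the array.
POWER_STATES = {
    "full_speed": 1.00,     # platters at full RPM, full performance
    "reduced_rpm": 0.60,    # platters spinning slower
    "heads_unloaded": 0.40, # heads parked, platters still turning
    "spun_down": 0.10,      # electronics alive, spindle stopped
}

def array_power(lun_states: dict, watts_per_lun_full: float = 100.0) -> float:
    """Total draw when each LUN can be in its own power state."""
    return sum(watts_per_lun_full * POWER_STATES[s] for s in lun_states.values())

# Three active LUNs at full speed, one archive LUN spun down:
print(array_power({"db": "full_speed", "mail": "full_speed",
                   "web": "full_speed", "archive": "spun_down"}))  # 310.0
```

The point is that the busy LUNs keep full performance while only the idle archive volume trades power for responsiveness, which is what distinguishes IPM from first generation spin-down.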

Something else that is taking place in the industry seems to be vendors shying away from using the term MAID as there is some stigma associated with performance issues of some first generation products.

This is not all that different than what took place about 15 years ago, when the first purpose-built monolithic RAID arrays appeared on the market. One example was the product called Failsafe (here and here) from SF2, aka the South San Francisco Forklift company, which was bought by MTI, with the patents later sold to EMC.

Failsafe, or what many at DEC referred to as Fail Some, was a large refrigerator-sized device with 5.25” disk drives configured as RAID5 with dedicated hot spare disk drives. Thus its performance was ok for the time doing random reads; however, writes in the pre-write-back-cache RAID5 days were less than spectacular.

Failsafe and other early RAID (and here) implementations received a black eye from some due to performance, availability and other issues until best practices and additional enhancements such as multiple RAID levels appeared along with cache in follow on products.

What that trip down memory (or nightmare) lane has to do with MAID, and particularly the first generation products that did their part to help establish the new technology, is that those early RAID products also gave way to second, third, fourth, fifth, sixth, and beyond generations of RAID products.

The same can be expected for MAID, as we are seeing more vendors jumping in on the second generation, also known as drive spin-down, with more in the wings.

Consequently, don't judge MAID based solely on the first generation products, which could be thought of as advanced technology proof-of-concept solutions that have paved the way for follow-on solutions.

Just like RAID has become so ubiquitous that it has been declared dead, making it another zombie technology (dead, however still being developed, produced, bought, and put to use), follow-on IPM enabled generations of technology will be more transparent. That is, similar to finding multiple RAID levels in most storage, look for IPM features including variable drive speeds, power settings, and performance options on a go-forward basis. These newer solutions may not carry the MAID name; however, the spirit and function of intelligent power management without performance compromise does live on.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
