Back to school shopping: Dude, Dell Digests 3PAR Disk storage


No sooner has the dust settled from Dell's other recent acquisitions than it is back to school shopping time, and the latest bargain for the Round Rock, Texas folks is Bay Area (San Francisco) storage vendor 3PAR for $1.15B. As a refresher, Dell's more recent acquisitions include EqualLogic for $1.4B a few years ago and Perot Systems for $3.9B, not to mention Exanet, Kace and Ocarina earlier this year. For those interested, as of April 2010 reporting figures found here, Dell showed about $10B USD in cash, and here is financial information on publicly held 3PAR (PAR).

Who is 3PAR?
3PAR is a publicly traded company (PAR) that makes a scalable or clustered storage system with many built-in advanced features typically associated with high end EMC DMX and VMAX as well as CLARiiON, in addition to Hitachi, HP or IBM enterprise class solutions. The InServ (3PAR's storage solution) combines hardware and software in a very scalable solution that can be configured for smaller environments or larger enterprises by varying the number of controllers or processing nodes, connectivity (server attachment) ports, cache and disk drives.

Unlike EqualLogic, which is a mid market iSCSI-only storage system, the 3PAR InServ is capable of going head to head with the EMC CLARiiON as well as DMX or VMAX systems, supporting a mix of iSCSI and Fibre Channel, or NAS via gateways or appliances. Thus, while there were occasional competitive situations between 3PAR and Dell EqualLogic, they were for the most part targeted at different market sectors or customer deployment scenarios.

What does Dell get with 3PAR?

  • A good deal if not a bargain on one of the last new storage startup pure plays
  • A public company that is actually generating revenue with a large and growing installed base
  • A seasoned sales force who knows how to sell into the enterprise storage space against EMC, HP, IBM, Oracle/SUN, Netapp and others
  • A solution that can scale in terms of functionality, connectivity, performance, availability, capacity and energy efficiency (PACE)
  • Potential route to new markets where 3PAR has had success, or to bridge gaps where both have played and competed in the past
  • Did I say a company with an established footprint of installed 3PAR InServ storage systems and a good list of marquee customers?
  • Ability to sell a solution for which they own the intellectual property (IP) instead of relying on that of partner EMC
  • Plenty of IP that can be leveraged within other Dell solutions, not to mention combine 3PAR with other recently acquired technologies or companies.

On a lighter note, Dell once again picks up Marc Farley, who was with them briefly after the EqualLogic acquisition and then departed to 3PAR, where he became director of social media, including launching Infosmack on Storage Monkeys with co-host Greg Knieriemen (@Knieriemen). Of course the Twitter world and traditional coconut wires are now speculating about where Farley will go next, and whether Dell may end up buying that company in the future too.

What does this mean for Dell and their data storage portfolio?
While in no way all-inclusive or comprehensive, table 1 provides a rough framework of different price bands, categories, tiers and market or application segments requiring various types of storage solutions into which Dell can sell.

 

| | HP | Dell | EMC | IBM | Oracle/Sun |
| --- | --- | --- | --- | --- | --- |
| Servers | Blade systems, rack mount, towers to desktop | Blade systems, rack mount, towers to desktop | Virtual servers with VMware, servers via vBlock (Cisco) | Blade systems, rack mount, towers to desktop | Blade systems, rack mount, towers to desktop |
| Services | HP managed services, consulting and hosting supplemented by EDS acquisition | Bought Perot Systems (an EDS spin off) | Partnered with various organizations and services | Has been doing smaller acquisitions adding tools and capabilities to IBM Global Services | Large internal consulting and services as well as Software as a Service (SaaS) hosting, partnered with others |
| Enterprise storage | XP (FC, iSCSI, FICON for mainframe and NAS with gateway), OEMed from Hitachi Japan, parent of HDS | 3PAR (FC and iSCSI, NAS with gateway); replaces EMC CLARiiON or perhaps rare DMX/VMAX at the high end? | DMX and VMAX | DS8000 | Sun resold the HDS version of XP/USP; however Oracle has since dropped it from the lineup |
| Data footprint impact reduction | Dedupe on VTL via Sepaton plus HP developed technology or OEMed products | Dedupe in OEM or partner software or hardware solutions; recently acquired Ocarina | Dedupe in Avamar, Datadomain, Networker, Celerra, Centera, Atmos; CLARiiON and Celerra compression | Dedupe in various hardware and software solutions, source and target; compression with Storwize | Dedupe via OEM VTLs and other Sun solutions |
| Data preservation | Database and other archive tools, archive storage | OEM solutions from EMC and others | Centera and other solutions | Various hardware and software solutions | Various hardware and software solutions |
| General data protection (excluding logical or physical security and DLP) | Internal Data Protector software plus OEM, partners with other software; various VTL, TL and target solutions as well as services | OEM and resell partner tools as well as Dell target devices and those of partners. Could this be a future acquisition target area? | Networker and Avamar software, Datadomain and other targets, DPA management tools and Mozy services | Tivoli suite of software and various hardware targets, management tools and cloud services | Various software and partner tools, tape libraries, VTLs and online storage solutions |
| Scale out, bulk or clustered NAS | eXtreme (X series) scale out, bulk and clustered storage for unstructured data applications | Exanet on Dell servers with shared SAS, iSCSI or FC storage | Celerra and Atmos | SONAS or N series (OEM from NetApp) | ZFS based solutions including the 7000 series |
| General purpose NAS | Various gateways for EVA, MSA or XP; IBRIX or PolyServe based as well as Microsoft WSS solutions | EMC Celerra, Dell Exanet or Microsoft WSS based. Acquisition or partner target area? | Celerra | N series OEMed from NetApp as well as growing awareness of SONAS | ZFS based solutions. Whatever happened to Procom? |
| Mid market multi protocol block | EVA (FC with iSCSI or NAS gateways); LeftHand (P series iSCSI) for the lower end of this market | 3PAR (FC and iSCSI, NAS with gateway) for the mid to upper end of this market, EqualLogic (iSCSI) for the lower end; some residual EMC CX activity phases out over time? | CLARiiON (FC and iSCSI with NAS via gateway); some smaller DMX or VMAX configurations for the mid to upper end of this market | DS5000 and DS4000 (FC and iSCSI with NAS via gateway), both OEMed from LSI; XIV and N series (NetApp) | 7000 series (ZFS and Sun storage software running on Sun servers with internal storage, optional external storage); 6000 series |
| Scalable SMB iSCSI | LeftHand (P series) | EqualLogic | Celerra NX, CLARiiON AX/CX | XIV, DS3000, N series | 2000 and 7000 series |
| Entry level shared block | MSA2000 (iSCSI, FC, SAS) | MD3000 (iSCSI, FC, SAS) | AX (iSCSI, FC) | DS3000 (iSCSI, FC, SAS); N series (iSCSI, FC, NAS) | 2000 and 7000 series |
| Entry level unified multi function | X series (not to be confused with the eXtreme series): HP servers with Windows storage software | Dell servers with Windows storage software or EMC Celerra | Celerra NX, Iomega | xSeries servers with Microsoft or other software installed | ZFS based solutions running on Sun servers |
| Low end SOHO | X series (not to be confused with the eXtreme series): HP servers with Windows storage software | Dell servers with storage and Windows storage software. A future acquisition area perhaps? | Iomega | | |

Table 1: Sampling of various tiers, architectures, functionality and storage solution options

Clarifying some of the above categories in table 1:

Servers: Application servers or computers running Windows, Linux, HyperV, VMware or other applications, operating systems and hypervisors.

Services: Professional and consulting services, installation, break fix repair, call center, hosting, managed services or cloud solutions

Enterprise storage: Large scale (hundreds to thousands of drives, many front-end as well as back-end ports, multiple controllers or storage processing engines (nodes), large amounts of cache) with equally strong performance, feature rich functionality, resilience and scalability.

Data footprint impact reduction: Archive, data management, compression, dedupe and thin provisioning among other techniques; see the quick arithmetic sketch after these definitions. Read more here and here.

Data preservation: Archiving for compliance and non regulatory applications or data including software, hardware, services.

General data protection: Excluding physical or logical data security (firewalls, DLP, etc.), this would be backup/restore with encryption, replication, snapshots, hardware and software to support BC, DR and normal business operations. Read more about data protection options for virtual and physical storage here.

Scale out NAS: Clustered NAS, bulk unstructured storage, cloud storage system or file system. Read more about clustered storage here. HP has their eXtreme X series of scale out and bulk storage systems as well as gateways. These leverage IBRIX and PolyServe, which HP bought, either as software or as a solution (HP servers, storage and software), perhaps with optional data reduction software such as Ocarina (now owned by Dell). Dell now has Exanet, which they bought recently, as software or as a solution running on Dell servers with SAS, iSCSI or FC back-end storage, plus optional data footprint reduction software such as Ocarina. IBM has GPFS as a software solution running on IBM or other vendors' servers with attached storage, or as a solution such as SONAS with IBM servers running the software in front of IBM DS mid range storage. IBM also OEMs NetApp as the N series.

General purpose NAS: NAS (NFS and CIFS or optional AFP and pNFS) for everyday enterprise (or SME/SMB) file serving and sharing

Mid market multi protocol block: For SMB to SME environments that need shared (SAN) scalable block storage using iSCSI, FC or FCoE

Scalable SMB iSCSI: For SMB to SME environments that need scalable iSCSI storage with feature rich functionality including built in virtualization

Entry level shared block: Block storage with the flexibility to support iSCSI, SAS or Fibre Channel, with optional NAS support built in or available via a gateway. One example is external SAS RAID storage shared between two or more servers configured in a HyperV or VMware cluster that does not need or cannot afford the higher cost of iSCSI. Another example would be shared SAS (or iSCSI or Fibre Channel) storage attached to a server running storage software such as a clustered file system (e.g. Exanet), or VTL, dedupe, backup, archiving or data footprint reduction tools, or perhaps database software, where the higher cost or complexity of an iSCSI or Fibre Channel SAN is not needed. Read more about external shared SAS here.

Entry level unified multifunction: Storage that can do block and file yet is scaled down for ease of acquisition, ease of sale, channel friendliness and simplified deployment and installation, while remaining affordable for SMBs or larger SOHOs as well as ROBOs.

Low end SOHO: Storage that can scale down to consumer, prosumer or lower end of SMB (e.g. SOHO) providing mix of block and file, yet priced and positioned below higher price multifunction systems.
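
To make the data footprint impact reduction category above concrete, here is a minimal sketch of the underlying arithmetic. The ratios are illustrative assumptions, not measurements from any particular product; real results vary widely by data type, and stacked techniques only roughly multiply when they act on independent redundancy.

```python
# Illustrative only: stacked data footprint reduction ratios roughly
# multiply (assuming each technique removes independent redundancy).
def combined_ratio(*ratios: float) -> float:
    result = 1.0
    for r in ratios:
        result *= r  # e.g. 2:1 compression stacked with 5:1 dedupe
    return result

overall = combined_ratio(2.0, 5.0)  # hypothetical 2:1 and 5:1 stages
# 10:1 overall means ~100TB of logical data in ~10TB of physical storage
print(f"{overall:.0f}:1 overall; 100TB logical -> {100 / overall:.0f}TB physical")
```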

Wait a minute, is that too many different categories or types of storage?

Perhaps, however it also enables multiple tools (tiers of technologies) to be in a vendor's toolbox, or in an IT professional's tool bin, to address different challenges. Let's come back to this in a few moments.

 

Some Industry trends and perspectives (ITP) thoughts:

How can Dell with 3PAR be an enterprise play without IBM mainframe FICON support?
Some would say forget about it, mainframes are dead and thus not a Dell objective, even though EMC, HDS and IBM sell a ton of storage into those environments. Fair enough, and it is an argument that 3PAR has faced for years while competing with EMC, HDS, HP, IBM and Fujitsu, thus they are well versed in how to handle that discussion. The 3PAR teams can help the Dell folks determine where to hunt and farm for business, something many of the Dell folks already know how to do. After all, today they have to flip that business to EMC or worse.

If truly pressured and in need, Dell could continue reference sales with EMC for DMX and VMAX. Likewise they could also go to Bustech and/or Luminex, who have open systems to mainframe gateways (including VTL support), under a custom or special solution sale. Ironically, EMC has in the past OEMed Bustech to transform their high end storage into mainframe VTLs (not to be confused with Falconstor or Quantum for open systems), as well as having Datadomain partner with Luminex.

BTW, did you know that Dell has had for several years a group or team that handles specialized storage solutions addressing needs outside the usual product portfolio?

Thus IMHO Dell's enterprise class focus will be on large scale out open systems, where they will compete with EMC DMX and VMAX, HDS USP or their soon to be announced enhancements, HP and their Hitachi Japan OEMed XP, IBM and the DS8000, as well as the seldom heard about yet equally scalable Fujitsu Eternus systems.

 

Why only $1.15B, after all they paid $1.4B for EqualLogic?
IMHO, had this deal occurred a couple of years ago when some valuations were still flying higher than today, with 3PAR at their current sales run rate and customer deployment situations, it is possible the amount would have been higher. Either way, this is still a great value for both Dell and 3PAR investors, customers, employees and partners.

 

Does this mean Dell dumps EMC?
Near term I do not think Dell dumps the EMC dudes (or dudettes) as there is still plenty of business in the mid market for the two companies. However, over time, I would expect that Dell will unleash the 3PAR folks into the space where normally a CLARiiON CX would have been positioned such as deals just above where EqualLogic plays, or where Fibre Channel is preferred. Likewise, I would expect Dell to empower the 3PAR team to go after additional higher end deals where a DMX or VMAX would have been the previous option not to mention where 3PAR has had success.

This would also mean extending into sales against HP EVA and XP, IBM DS5000 and DS8000 as well as XIV, and Oracle/Sun 6000 and 7000 to name a few. In other words, there will be some spin around coopetition, however longer term you can read the writing on the wall. Oh, btw, lest you forget, Dell is first and foremost a server company that is now getting into storage in a much bigger way, and EMC is first and foremost a storage company that is getting into servers via VMware as well as their Cisco partnerships.

Are shots being fired across each other's bows? I will leave that up to you to speculate.

 

Does this mean Dell MD1000/MD3000 iSCSI, SAS and FC disappears?
I do not think so, as they have had a specific role at the entry level, below where the EqualLogic iSCSI-only solution fits, providing mixed iSCSI, SAS and Fibre Channel capabilities to compete with the HP MSA2000 (OEMed from DotHill) and IBM DS3000 (OEMed from LSI). 3PAR could be taken down into some of these markets; however, that would also potentially dilute the brand and thus the premium margin of those solutions.

Likewise, there is a play with server vendors to attach shared SAS external storage to small 2 and 4 node clusters for VMware, HyperV, Exchange, SQL, SharePoint and other applications where iSCSI or Fibre Channel are too expensive or not needed, or where NAS is not a fit. Another play for shared external SAS is attaching low cost storage to scale out clustered NAS or bulk storage where software such as Exanet runs on a Dell server. Take a closer look at how HP is supporting their scale out solutions, as well as IBM and Oracle among others. Sure, you can find iSCSI or Fibre Channel or even NAS back ends to file servers; however, there is a growing trend of using shared SAS.

 

Does Dell now have too many different storage systems and solutions in their portfolio?
Possibly, depending upon how you look at it, and certainly the potential is there for revenue prevention teams to get in the way of each other instead of competing with external competitors. However, if you compare the Dell lineup with those of EMC, HP, IBM and Oracle/Sun among others, it is not all that different. Note that HP, IBM and Oracle also have something in common with Dell in that they are general IT resource providers (servers, storage, networks, services, hardware and software) as compared to other traditional storage vendors.

Consequently, if you look at these vendors in terms of their different markets, from consumer to prosumer to SOHO at the low end of the SMB space up to the SME segment that sits between SMB and enterprise, they have diverse customer needs. Likewise, if you look at these vendors' server offerings, they too are diverse, ranging from desktops to floor standing towers to racks, high density racks and blade servers that also need various tiers, architectures, price bands and purposed storage functionality.

 

What will be key for Dell to make this all work?
The key for Dell will be similar to that of their competitors which is to clearly communicate the value proposition of the various products or solutions, where, who and what their target markets are and then execute on those plans. There will be overlap and conflict despite the best spin as is always the case with diverse portfolios by vendors.

However, the key will be whether Dell can keep their teams focused on expanding their customer footprint at the expense of their external competition vs. cannibalizing their own internal product lines, not to mention creating or extending into new markets or applications. Dell now has many tools in their toolbox and thus needs to educate their solution teams on what to use or sell when, where, why and how, instead of having just one tool or a singular focus. In other words, while a great solution, Dell no longer has to respond as though the answer to everything is iSCSI based EqualLogic.

Likewise, Dell can leverage the same emotion and momentum behind the EqualLogic teams to invigorate and unleash the 3PAR teams and solution onto the higher end of the SMB, SME and enterprise environments.

I'm still thinking that Exanet is a diamond in the rough for Dell, where they can install the clustered scalable NAS software onto their servers and use either lower end shared SAS RAID (e.g. MD3000), iSCSI (MD3000, EqualLogic or 3PAR) or higher end Fibre Channel (with 3PAR) for scale out, cloud and other bulk solutions competing with HP, Oracle and IBM. Dell still has the Windows based storage server for entry level multi protocol block and file capabilities, as well as what they OEM from EMC.

 

Is Dell done shopping?
IMHO I do not think so, as there are still areas where Dell can extend their portfolio, and not just in storage. Likewise there are still some opportunities, or perhaps bargains, out there for fall and beyond acquisitions.

 

Does this mean that Dell is not happy with EqualLogic and iSCSI?
Simply put, from my perspective talking with Dell customers, prospects and partners and seeing them all in action, nothing could be further from the truth. Look at this as a way to extend the Dell story and capabilities into new markets; granted, the EqualLogic folks now have a new sibling to compete with for internal marketing and management love and attention.

 

Isn't Dell just an iSCSI focused company?
A couple of years ago I was quoted in one of the financial analysis reports as saying that Dell needed to remain open to various forms of storage instead of becoming singularly focused on just iSCSI as a result of the EqualLogic deal. I stand by that statement: to be a strong enterprise contender, Dell needs a balanced portfolio across different price or market bands, from block to file, from shared SAS to iSCSI to Fibre Channel and emerging FCoE.

This also means supporting traditional NAS across those different price bands or market sectors, as well as supporting the emerging and fast growing unstructured data markets where there is a need for scale out and bulk storage. Thus it is great to see Dell remaining open minded and not becoming singularly focused on just iSCSI, instead providing the right solution to meet their diverse customer as well as prospect needs or opportunities.

While EqualLogic was and is a very successful iSCSI focused storage solution, not to mention one that Dell continues to leverage, Dell is more than just iSCSI. Take a look at Dell's current storage lineup as well as table 1 above and there is a lot of existing diversity. Granted, some of that current diversity is via partners, which the 3PAR deal helps to address. What this means is that iSCSI continues to grow in popularity, however there are other needs where shared SAS, Fibre Channel or FCoE will be required, opening new markets to Dell.

 

Bottom line and wrap up (for now)
This is a great move for Dell (as well as 3PAR) to move up market in the storage space with less reliance on EMC. Assuming that Dell can communicate what to use when, where, why and how to their internal teams and partners, as well as to the industry and customers, not to mention then execute on it, they should have themselves a winner.

Will this deal end up being an even better bargain than when Dell paid $1.4B for EqualLogic?

Not sure yet; it certainly has the potential if Dell can execute on their plans without losing momentum in any of their other areas (products).

Whats your take?

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Here are some related links to read more

July 2010 Odds and Ends: Perspectives, Tips and Articles

Here are some items that have been added to the main StorageIO website news, tips and articles, video podcast related pages that pertain to a variety of topics ranging from data storage, IO, networking, data centers, virtualization, Green IT, performance, metrics and more.

These content items include various odds and end pieces such as industry or technology commentary, articles, tips, ATEs (See additional ask the expert tips here) or FAQs as well as some video and podcasts for your mid summer (if in the northern hemisphere) enjoyment.

The New Green IT: Productivity, supporting growth, doing more with what you have

Energy efficient and money saving Green IT or storage optimization are often assumed to mean things like MAID, Intelligent Power Management (IPM) for servers and storage, disk drive spin down or data deduplication. In other words, technologies and techniques to minimize or avoid power consumption as well as subsequent cooling requirements, which for some data, applications or environments can be the case. However, there is also a shift from energy avoidance to being efficient, effective, productive, not to mention profitable, as forms of optimization. Collectively these various techniques and technologies help address or close the Green Gap and can reduce Green IT confusion by boosting productivity (same goes for servers or networks) in terms of more work, IOPS, bandwidth, data moved, frames or packets, transactions, videos or email processed per watt per second (or other unit of time).
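
As a minimal sketch of that productivity per watt idea, here is the arithmetic; the numbers below are made-up assumptions for illustration, not measurements of any real system:

```python
# Illustrative only: compare two hypothetical storage systems on
# productivity per watt (IOPS/W) rather than raw power draw alone.
def iops_per_watt(iops: float, watts: float) -> float:
    """Useful work (IOPS) delivered per watt consumed over the same interval."""
    return iops / watts

a = iops_per_watt(iops=20_000, watts=500)  # hypothetical system A: 40.0 IOPS/W
b = iops_per_watt(iops=60_000, watts=900)  # hypothetical system B: ~66.7 IOPS/W

# B draws more total power yet is the more productive (greener) system
print(f"A: {a:.1f} IOPS/W, B: {b:.1f} IOPS/W")
```

The same division works for bandwidth, transactions or videos processed per watt; the point is measuring work accomplished per unit of energy rather than energy avoided alone.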

Click here to read and listen to my comments about boosting IOPS per watt, or here to learn more about the many facets of energy efficient storage and here on different aspects of storage optimization. Want to read more about the next major wave of server, storage, desktop and networking virtualization? Then click here to read more about virtualization life beyond consolidation, where the emphasis or focus expands to abstraction, transparency and enablement in addition to consolidation for servers, storage and networks. If you are interested in metrics and measurements, Storage Resource Management (SRM), not to mention discussion about various macro data center metrics including PUE among others, click on the preceding links.

NAS and Shared Storage, iSCSI, DAS, SAS and more

Shifting gears to general industry trends and commentary, here are some comments on consumer and SOHO storage sharing, the role and importance of Value Added Resellers (VARs) for SMB environments, as well as the top storage technologies that are in use and remain relevant. Here are some comments on iSCSI, which continues to gain in popularity, as well as storage options for small businesses.

Are you looking to buy or upgrade a server? Here are some vendor and technology neutral tips to help determine needs along with requirements, to help you be a more effective and informed buyer. Interested in or want to know more about Serial Attached SCSI (6Gb/s SAS), including for use as external shared direct attached storage (DAS) for Exchange, SharePoint, Oracle, VMware or HyperV clusters among other usage scenarios? Check out this FAQ as well as podcast. Here are some other items, including a podcast about using storage partitions in your data storage infrastructure, an ATE about what type of 1.5TB centralized storage to support multiple locations, and a video on scaling with clustered storage.

That is all for now, hope all is well and enjoy the content.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Industry Trends and Perspectives: Tiered Storage, Systems and Mediums

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageio.com/reports.

Two years ago we read about how the magnetic disk drive would be dead in a couple of years at the hands of flash SSD. Guess what, it is a couple of years later and the magnetic disk drive is far from dead. Granted, high performance Fibre Channel disks will continue to be replaced by high performance, small form factor 2.5" SAS drives, along with continued adoption of high capacity SAS and SATA devices.

Likewise, SSD or flash drives continue to be deployed, however outside of iPhone, iPod and other consumer or low end devices, nowhere near the projected or perhaps hoped for level. Rest assured, the trend I'm seeing and hearing from IT customers is that while some will continue to look for places to strategically deploy SSD where possible, practical and affordable, there will continue to be a role for disk and even tape devices on a go forward basis.

Also watch for more coverage and discussion around the emergence of the Hybrid Hard Disk Drive (HHDD) that was discussed about four to five years ago. The HHDD made an appearance and then quietly went away for some time, perhaps for more R and D time in the labs while flash SSD garnered the spotlight.

There could be a good opportunity for HHDD technology to leverage the best of both worlds: continued price decreases for larger capacity disk, combined with smaller yet more affordable amounts of flash, in a solution that is transparent to the server or storage controller, making for easier integration.

Related and companion material:
Blog: ILM = Has It Lost its Meaning
Blog: SSD and Storage System Performance
Blog: Has SSD put Hard Disk Drives (HDDs) On Endangered Species List
Blog: Optimize Data Storage for Performance and Capacity Efficiency

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Industry Trends and Perspectives: RAID Rebuild Rates

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageio.com/reports.

There is continued concern about how long large capacity disk drives take to rebuild in RAID sets, particularly as the shift from 1TB to 2TB drives continues. It should not be a surprise that a disk with more capacity will take longer to rebuild or copy; likewise, with more drives, the likelihood of one failing statistically increases.
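
As a rough, hypothetical illustration (actual rates vary widely by system, RAID level and concurrent workload), the lower bound on a rebuild is simply drive capacity divided by the sustained rebuild rate:

```python
# Lower-bound rebuild time: capacity / sustained rebuild rate.
# The rates below are assumptions for illustration only; real rebuilds
# are often slower because application I/O competes with the rebuild.
def rebuild_hours(capacity_tb: float, rate_mb_per_s: float) -> float:
    mb = capacity_tb * 1_000_000  # TB -> MB (decimal units, as drives are rated)
    return mb / rate_mb_per_s / 3600

print(f"{rebuild_hours(1.0, 50):.1f}h")  # 1TB at a sustained 50MB/s: ~5.6 hours
print(f"{rebuild_hours(2.0, 50):.1f}h")  # 2TB at the same rate: ~11.1 hours
```

Doubling capacity at a fixed rebuild rate doubles the minimum rebuild window, which is why each capacity generation renews the concern unless rebuild rates improve in step.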

Not to diminish the issue, however also to avoid saying the sky is falling: we have been here before! In the late 90s and early 2000s there was a similar concern with the then-large 9GB and 18GB, let alone emerging 36GB and 72GB drives. There have been improvements in RAID and rebuild algorithms along with other storage system software or firmware enhancements, not to mention boosts in processor and IO bus performance.

However not all storage systems are equal even if they use the same underlying processors, IO busses, adapters or disk drives. Some vendors have made significant improvements in their rebuild times where each generation of software or firmware can reconstruct a failed drive faster. Yet for others, each subsequent iteration of larger capacity disk drives brings increased rebuild times.

If disk drive rebuild times are a concern, ask your vendor or solution provider what they are doing as well as have done over the past several years to boost their performance. Look for signs of continued improvement in rebuild and reconstruction performance as well as decrease in error rates or false drive rebuilds.

Related and companion material:
Blog: RAID data protection remains relevant
Blog: Optimize Data Storage for Performance and Capacity Efficiency

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Industry Trends and Perspectives: 6GB SAS and DAS are not Dumb A$$ Storage


This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageio.com/reports.

With 6Gb SAS boosting performance as well as connectivity flexibility, more servers are supporting SAS natively, while storage systems continue to add support for 3.5" and 2.5" small form factor high performance and large capacity SAS drives. Shared SAS DAS storage systems are being deployed for consolidation, attached to two or more servers, as well as for clustered solutions.

Another area where shared SAS DAS storage is being deployed is in cloud, scale out NAS and bulk storage environments as a price performance alternative to iSCSI or Fibre Channel solutions.

Keep an eye on these and other trends including converged systems, server, storage and networking management along with associated tools.

Related and companion material:
Article: Green and SASy = Energy and Economic, Effective Storage
Article: The Many Faces of SAS – Beyond the DAS Factor

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Seagate to say goodbye to Cayman Islands, Hello Ireland

Seagate (NASDAQ: STX), the parent of the company many people in IT and data storage in particular know as Seagate the disk drive manufacturer, is moving its paper headquarters from the Cayman Islands, where it has been based since 2000, to Ireland.

Let me rephrase that: Seagate is not moving its Scotts Valley, California headquarters of operations, or any design, manufacturing or marketing to Ireland that is not already there. Rather, Seagate as a manufacturing company is moving where it is incorporated (its paper corporate headquarters) from the Cayman Islands to the Emerald Isle of Ireland.

Confused yet?
Do not worry, it is confusing at first. I ended up having to reread the Seagate corporate material, and remembering back to the late 1990s, it all started to make sense. Seagate has over 50,000 employees located at facilities around the world, including manufacturing, support, design, research and development, sales and marketing along with corporate administration among others.

Their business, while focused on data storage, is currently very much centered on magnetic disk drives, with a diversified portfolio including products obtained via their acquisition of Maxtor. The Seagate product portfolio spans, among others, high end enterprise class Fibre Channel and SAS 15,500 RPM (15.5K) high performance drives, high capacity SAS and SATA devices, 10K small form factor (SFF) drives for the mid market and SMB, USB based SOHO, prosumer or consumer devices, along with portable and specialized devices, including emerging SSD and hybrid devices.

However, back in the late 1990s Seagate ventured off into some other areas for a time, including owning (in part) Veritas (since divested and now part of Symantec) and Xiotech (now back on its own under venture ownership, including some tied to Seagate), among other transactions. In a series of mergers and acquisitions, divestitures, restructurings and paper corporate headquarters moves that reads like something out of a Hollywood movie, Seagate ended up moving its place of incorporation to the Cayman Islands.

Seagate as it was known had essentially become the manufacturing company owned by a paper holding company incorporated offshore for business and tax purposes. Want to learn more? Read the company's annual reports and other filings, some of which can be found here.

The Business End of the Move
Without getting into the deep details of international finance, tax law or articles of business incorporation, many companies are actually incorporated in a location different from where they actually have their headquarters. In the United States, that is often Delaware, where corporations file their paperwork for articles of incorporation and then locate their headquarters or primary place of business elsewhere.

Seagate SEC filing outlining the proposed move

Outside of the United States, the Cayman Islands among other locations have been popular places for companies to file their paperwork and have a paper headquarters due to favorable tax rates and other business benefits. Perhaps you have even watched a movie or two where part of the plot involved some business transaction of a paper company located in the Cayman Islands as a means of sheltering business dealings. In the case of Seagate, in 2000 during a restructuring, their corporate (paper) headquarters was moved to the Caymans due to its favorable business climate, including lower tax structure.

Dive Cayman Islands

Disclosure: While I am a certified and experienced PADI SCUBA Divemaster who has visited many different venues, the Cayman Islands are not one of them. Likewise, while I have distant relatives there whom I have never met, I would love to visit Ireland sometime.

Why is Seagate saying goodbye to the nice warm climate of the Cayman Islands and heading off to the Emerald Isle?

Visit Ireland

Simple: a more favorable business climate that includes international business and taxation benefits, plus the fact that Ireland is not coming under scrutiny as a tax haven by the U.S. and other governments as the Cayman Islands (along with other locations) have. Let me also be clear that Seagate is not new to Ireland, having had a presence there for some time (see here).

What does all of this mean?
From a technology perspective, pretty much nothing, as this appears to be mainly a business and financial move for the shareholders of Seagate. As for impact on shareholders, other than reading through some documents if so inclined, there is probably not much impact, if any at all.

As for IT customers and their solution providers who are customers of Seagate, this probably does not mean anything at all, as it should be business as usual.

What about others parties, governments, countries or entities?

Tough to say if this is the start of a trend of companies moving their paper headquarters from the Caymans to elsewhere so as to escape being in the spotlight of U.S. and other governments looking for additional revenues.

Perhaps a boon to Ireland if more companies decide to move their paper as well as actual company operations there as many have done over the past decades. Otherwise for the rest of us, it can make for interesting reading, conversations, speculation, debate and discussion.

And that is all that I have to say about this for now, what say you?

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Post Holiday IT Shopping Bargains, Dell Buying Exanet?

For consumers, the time leading up to the Christmas holiday season is usually busy, including door busters as well as Black Friday among other specials for purchasing gifts and other items. However, savvy shoppers will wait until after Christmas or the holidays altogether, perhaps well into the New Year, when some good bargains can become available. IT customers are no different, with budgets to use up before the end of the year, thus a flurry of purchases that should become evident soon as we enter earnings announcement season.

However, there are also bargains for IT organizations looking to take advantage of special vendor promotions trying to stimulate sales, not to mention for IT vendors to do some shopping of their own. Consequently, in addition to the flurry of merger and acquisition (M and A) activity from last summer through the fall, there have been several recent deals, some of which might make Monty Hall blush!

Some recent acquisition activity includes, among others:

  • Dell bought Perot systems for $3.9B
  • DotHill bought Cloverleaf
  • Texas Memory Systems (TMS) bought Incipient
  • HP bought IBRIX and 3COM among others
  • LSI bought Onstor
  • VMware bought Zimbra
  • Micron bought Numonyx
  • Exar bought Neterion

Now the industry is abuzz about Dell, who is perhaps using some of the loose change left over from holiday sales, being in the process of acquiring Israeli clustered storage startup Exanet for about $12M USD. Compared to previous Dell acquisitions, including EqualLogic in 2007 for about $1.4B or last year's Perot deal in the $3.9B range, $12M is a bargain and would probably not even put a dent in the sales and marketing advertising budget, let alone the corporate cash coffers, which as of their Q3-F10 balance sheet showed about $12.795B in cash.

Who is Exanet and what is their product solution?
Exanet is a small Israeli startup providing a clustered, scale out NAS file serving storage solution (Figure 1) that began shipping in 2003. The Exanet solution (ExaStore) can be either software based or a packaged solution: ExaStore software installed on standard x86 servers, combined with external RAID storage arrays, as a clustered NAS file server.

Product features include global namespace, distributed metadata, expandable file systems, virtual volumes, quotas, snapshots, file migration, replication, virus scanning and load balancing, along with NFS, CIFS and AFP access. Exanet scales up to 1 exabyte of storage capacity, along with supporting large files and billions of files per cluster.

The target market that Exanet pursues is large scale out NAS where performance (either small random or large sequential I/Os) along with capacity are required. Consequently, in the scale out, clustered NAS file serving space, competitors include IBM GPFS (SONAS), HP IBRIX or PolyServe, Sun Lustre and Symantec SFS among others.

Figure 1: Generic clustered storage model (Courtesy The Green and Virtual Data Center (CRC))

For a turnkey solution, Exanet packaged their cluster file system software on various vendors' servers combined with 3rd party external Fibre Channel or other storage. This should play well for Dell, who can package the Exanet software on its own servers as well as leverage either SAS or Fibre Channel MD1000/MD3000 external RAID storage among other options (see more below).

Click here to learn more about clustered storage including clustered NAS, clustered and parallel file systems.


What's the Dell play?

  • It's an opportunity to acquire some intellectual property (IP)
  • It's an opportunity to have IP similar to EMC, HP, IBM, NetApp, Oracle and Symantec among others
  • It's an opportunity to address a market gap or need
  • It's an opportunity to sell more Dell servers, storage and services
  • It's an opportune time for doing acquisitions (bargain shopping)

Note: IBM also this past week announced their new bundled scale out clustered NAS file serving solution based on GPFS called SONAS. HP has IBRIX in addition to their previous PolyServe acquisition, Sun has ZFS and Lustre.

How does Exanet fit into the Dell lineup?

  • Dell sells Microsoft based NAS as NX series
  • Dell has an OEM relationship with EMC
  • Dell was OEMing or reselling IBRIX in the past for certain applications or environments
  • Dell has needed to expand its NAS story to balance its iSCSI centric storage story as well as complement its multifunction block storage solutions (e.g. MD3000) and server solutions.

Why Exanet?
Why Exanet, and why not one of the other startups or small NAS or cloud file system vendors, including BlueArc, Isilon, Panasas, Parascale, Reldata, Open-E or Zetta among others?

My take is that it is probably because the others were either not relevant to what Dell is looking for, lacked a seamless technology and business fit, had technology tied to non Dell hardware, had technology maturity concerns, or their investors are still expecting a premium valuation, or some combination of the preceding.

Additional thoughts on why Exanet
I think that Dell simply saw an opportunity to acquire some intellectual property (IP), probably including a patent or two. The value of the patents could be in the form of current or future product offerings, perhaps a negotiating tool, or if nothing else a marketing tool. As a marketing tool, Dell via their EqualLogic acquisition among others has been able to demonstrate and generate awareness that they actually own some IP vs. OEMing or reselling that of others. I also think that this is an opportunity to either fill or supplement the solution offering that IBRIX provided for high performance, bulk storage and scale out file serving needs.

NAS and file serving supporting unstructured data are a strong growth market for commercial, high performance, specialized or research as well as small business environments. Thus, where EqualLogic plays to the iSCSI block theme, Dell needs to expand their NAS and file serving solutions to provide product diversity to meet various customer application needs, similar to what they do with block based storage. For example, while the iSCSI based EqualLogic PS systems get the bulk of the marketing attention, Dell also has a robust business around the PowerVault MD1000/MD3000 (SAS/iSCSI/FC) and the Microsoft multi protocol based PowerVault NX series, not to mention their EMC CLARiiON based OEM solutions (e.g. Dell AX, Dell/EMC CX).

Thus, Dell can complement the Microsoft multi protocol (block and NAS file) NX with a packaged solution of Dell servers and MD (or other affordable block storage) powered by Exanet. It is also possible that Dell will find a way to package Exanet as a NAS gateway in front of the iSCSI based EqualLogic PS systems, though that would make for an expensive scale out NAS solution compared to those from other vendors.

That's it for now.

Let's see how this all plays out.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio


2010 and 2011 Trends, Perspectives and Predictions: More of the same?

2011 is not a typo; I figured that since I'm getting caught up on some things, why not get a jump as well.

Since 2009 went by so fast and I'm finally getting around to doing an obligatory 2010 predictions post, let's take a look at both 2010 and 2011.

Actually, I'm only now getting around to doing a post here, having already done interviews and articles for others soon to be released.

Based on prior trends and looking at forecasts, a simple prediction is that some of the items for 2010 will apply for 2011 as well, given that some of this year's items may have been predicted by some in 2008, 2007, 2006, 2005 or, well ok, you get the picture. :)

Predictions are fun and funny in that for some, they are taken very seriously, while for others they are at best taken with a grain of salt, depending on where you sit. This applies both to the reader as well as whoever is making the predictions, along with their various motives or incentives.

Some are serious, some not so much…

For some, predictions are a great way of touting or promoting favorite wares (hard, soft or services) or getting yet another plug (YAP is a TLA BTW) in to meet coverage or exposure quota.

Meanwhile for others, predictions are a chance to brush up on new terms for the upcoming season of buzzword bingo games (did you pick up on YAP).

In honor of the Vancouver winter games, I'm expecting some cool Olympic sized buzzword bingo games with a new slippery fast one being federation. Some buzzwords will take a break in 2010 as well as 2011, having been worked pretty hard the past few years, while others that have been on break will reappear well rested, rejuvenated and ready for duty.

Let's also clarify something regarding predictions, which is that they can come from at least two different perspectives. One view is the trend of what will be talked about or discussed in the industry. The other is in terms of what will actually be bought, deployed and used.

What can be confusing is that sometimes the two perspectives are intermixed or assumed to be one and the same, and for 2010 I see that trend continuing. In other words, there is adoption in terms of customers asking about and investigating technologies vs. deployment where they are buying, installing and using those technologies in primary situations.

It is safe to say that there is still no such thing as an information, data or processing recession. Ok, surprise surprise; my dogs could have probably made that prediction during a nap. However what this means is more data will need to be moved, processed and stored for longer periods of time and at a lower cost without degrading performance or availability.

This means, denser technologies that enable a lower per unit cost of service without negatively impacting performance, availability, capacity or energy efficiency will be needed. In other words, watch for an expanded virtualization discussion around life beyond consolidation for servers, storage, desktops and networks with a theme around productivity and virtualization for agility and management enablement.

Certainly there will be continued mergers and acquisitions on both a small as well as large scale, ranging from liquidation sales or bargain hunting to a large mega blockbuster or two. I'm thinking in terms of outside of the box deals, the type that will have people wondering, perhaps confused, as to why such a deal would be done until the whole picture is revealed and thought out.

In other words, outside of perhaps IBM, HP, Oracle, Intel or Microsoft among a few others, no vendor is too large not to be acquired, merged with, or even involved in a reverse merger. I'm also thinking in terms of vendors filling in niche areas as well as building out their larger portfolios and IT stacks for integrated solutions.

Ok, let's take a look at some easy ones, layups or slam dunks:

  • More cluster, cloud conversations and confusion (public vs. private, service vs. product vs. architecture)
  • More server, desktop, IO and storage consolidation (excuse me, server virtualization)
  • Data footprint impact reduction ranging from deletion to archive to compress to dedupe among others
  • SSD and in particular flash continues to evolve with more conversations around PCM
  • Growing awareness of social media as yet another tool for customer relations management (CRM)
  • Security, data loss/leak prevention, digital forensics, PCI (payment card industry) and compliance
  • Focus expands from gaming/digital surveillance/security and energy to healthcare
  • Fibre Channel over Ethernet (FCoE) mainstream in discussions with some initial deployments
  • Continued confusion of Green IT and carbon reduction vs. economic and productivity (Green Gap)
  • No such thing as an information, data or processing recession, granted budgets are strained
  • Server, Storage or Systems Resource Analysis (SRA) with event correlation
  • SRA tools that provide and enable automation along with situational awareness

The green gap of confusion will continue, with carbon or environment centric stories and messages continuing to take a back seat as people realize the other dimension of green: productivity.

As previously mentioned, virtualization of servers and storage continues to be popular, with the focus expanding from just consolidation to one around agility and flexibility, enabling production, high performance or other systems that do not lend themselves to consolidation to be virtualized.

6Gb SAS interfaces as well as more SAS disk drives continue to gain popularity. I have said in the past there was a long shot that 8GFC disk drives might appear. We might very well see those in higher end systems, while SAS drives continue to pick up the high performance spinning disk role in mid range systems.

Granted some types of disk drives will give way over time to others, for example high performance 3.5” 15.5K Fibre Channel disks will give way to 2.5” 15.5K SAS boosting densities, energy efficiency while maintaining performance. SSD will help to offload hot spots as they have in the past enabling disks to be more effectively used in their applicable roles or tiers with a net result of enhanced optimization, productivity and economics all of which have environmental benefits (e.g. the other Green IT closing the Green Gap).

What I don't see occurring, or at least not in 2010

  • An information or data recession requiring less server, storage, I/O networking or software resources
  • OSD (object based disk storage without a gateway) at least in the context of T10
  • Mainframes, magnetic tape, disk drives, PCs, or Windows going away (at least physically)
  • Cisco cracking top 3, no wait, top 5, no make that top 10 server vendor ranking
  • More respect for growing and diverse SOHO market space
  • iSCSI taking over for all I/O connectivity, however I do see iSCSI expand its footprint
  • FCoE and flash based SSD reaching tipping point in terms of actual customer deployments
  • Large increases in IT Budgets and subsequent wild spending rivaling the dot com era
  • Backup, security, data loss prevention (DLP), data availability or protection issues going away
  • Brett Favre and the Minnesota Vikings winning the Super Bowl

What will be predicted at end of 2010 for 2011 (some of these will be DejaVU)

  • Many items that were predicted this year, last year, the year before that and so on…
  • Dedupe moving into primary and online active storage, rekindling of dedupe debates
  • Demise of cloud in terms of hype and confusion being replaced by federation
  • Clustered, grid, bulk and other forms of scale out storage grow in adoption
  • Disk, Tape, RAID, Mainframe, Fibre Channel, PCs, Windows being declared dead (again)
  • 2011 will be the year of Holographic storage and T10 OSD (an annual prediction by some)
  • FCoE kicks into broad and mainstream deployment adoption reaching tipping point
  • 16Gb (16GFC) Fibre Channel gets more attention stirring FCoE vs. FC vs. iSCSI debates
  • 100GbE gets more attention along with 4G adoption in order to move more data
  • Demise of iSCSI at the hands of SAS at low end, FCoE at high end and NAS from all angles

Gaining ground in 2010 however not yet in full stride (at least from customer deployment)

  • On the connectivity front, iSCSI, 6Gb SAS, 8Gb Fibre Channel, FCoE and 100GbE
  • SSD/flash based storage everywhere, however continued expansion
  • Dedupe everywhere including primary storage – it is still far from its full potential
  • Public and private clouds along with pNFS as well as scale out or clustered storage
  • Policy based automated storage tiering and transparent data movement or migration
  • Microsoft HyperV and Oracle based server virtualization technologies
  • Open source based technologies along with heterogeneous encryption
  • Virtualization life beyond consolidation addressing agility, flexibility and ease of management
  • Desktop virtualization using Citrix, Microsoft and VMware along with Microsoft Windows 7

Buzzword bingo hot topics and themes (in no particular order) include:

  • 2009 and previous year carry over items including cloud, iSCSI, HyperV, Dedupe, open source
  • Federation takes over some of the work of cloud, virtualization, clusters and grids
  • E2E, End to End management preferably across different technologies
  • SAS, Serial Attached SCSI for server to storage systems and as disk to storage interface
  • SRA, E2E, event correlation and other situational awareness related IRM tools
  • Virtualization, Life beyond consolidation enabling agility, flexibility for desktop, server and storage
  • Green IT, Transitions from carbon focus to economic with efficiency enabling productivity
  • FCoE, Continues to evolve and mature with more deployments however still not at tipping point
  • SSD, Flash based mediums continue to evolve however tipping point is still over the horizon
  • IOV, I/O Virtualization for both virtual and non virtual servers
  • Other new or recycled buzzword bingo candidates include PCoIP, 4G,

RAID will again be pronounced dead and no longer relevant, yet it will be found in more diverse deployments from consumer to enterprise. In other words, RAID may be boring and thus no longer relevant to talk about, yet it is being used everywhere and enhanced in evolutionary, perhaps for some even revolutionary, ways.

Tape keeps being declared dead (e.g. placed on the zombie technology list) yet is being enhanced, purchased and utilized at higher rates, with more data stored on it than at any time in history. Instead of being killed off by the disk drive, tape is being kept around for both traditional uses as well as new roles where it is best suited, such as long term or bulk offline storage of data in an ultra dense, energy efficient, not to mention economical manner.

What I am seeing and hearing is that customers using tape are able to reduce the number of drives or transports; by leveraging disk buffers or caches, including VTL and dedupe devices, they are able to operate their devices at higher utilization, thus requiring fewer devices with more data stored per media than in the past.

Likewise, even though I have been a fan of SSD for about 20 years and am bullish on its continued adoption, I do not see SSD killing off the spinning disk drive anytime soon. Disk drives are helping tape take on this new role by acting as a buffer or cache in the form of VTLs, disk based backup and bulk storage, enhanced with compression, dedupe, thin provisioning and replication among other functionality.

There you have it, my predictions, observations and perspectives for 2010 and 2011. It is a broad and diverse list, however I also get asked about and see a lot of different technologies, techniques and trends tied to IT resources (servers, storage, I/O and networks, hardware, software and services).

Let's see how they play out.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

RAID Relevance Revisited

Following up from some previous posts on the topic, a continued discussion point in the data storage industry is the relevance (or lack thereof) of RAID (Redundant Array of Independent Disks).

These discussions tend to revolve around how RAID is dead due to its real or perceived inability to continue scaling in terms of the performance, availability, capacity, economic or energy capabilities needed, particularly when compared to those of newer techniques, technologies or products.

RAID Relevance

While there are many new and evolving approaches to protecting data in addition to maintaining availability or accessibility of information, RAID, despite the fanfare, is far from being dead, at least on the technology front.

Sure, there are issues or challenges that require continued investment in RAID, as has been the case over the past 20 years; however those will be addressed on a go forward basis via continued innovation and evolution along with riding technology improvement curves.

Now from a marketing standpoint, ok, I can see where the RAID story is dead or boring and something new and shiny is needed, or at least a change of pitch to sound like something new.

Consequently, when long in the tooth and subject to some of the aforementioned items among others, older technologies that may be boring or lack sizzle or marketing dollars can be and often are declared dead on the buzzword bingo circuit. After all, how long now has the industry trade group RAID Advisory Board (RAB) been missing in action, retired, spun down, archived or ILMed?

RAID remains relevant because like other dead or zombie technologies it has reached the plateau of productivity and profitability. That success is also something that emerging technologies envy as their future domain and thus a classic marketing move is to declare the incumbent dead.

The reality is that RAID in all of its various instances from hardware to software, standard to non-standard with extensions is very much alive from the largest enterprise to the SMB to the SOHO down into consumer products and all points in between.

Now candidly, like any technology that is about 20 years old if not older (after all, the disk drive is over 50 years old, and how long has it been declared dead?), RAID in some ways is long in the tooth and there are certainly issues to be addressed, as similar ones have been in the past. Some of these include the overhead of rebuilding large capacity 1TB and 2TB disk drives, with even larger drives coming in the not so distant future.

There are also issues pertaining to distributed data protection in support of cloud, virtualized or other solutions that need to be addressed. In fact, go way back to when RAID appeared commercially on the scene in the late 80s, and one of the value propositions among others was to address the reliability of emerging large capacity, multi MByte sized SCSI disk drives. It seems almost laughable today that a decade later, when 1GB disk drives appeared on the market in the 90s, there was renewed concern about RAID and disk drive rebuild times.
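To get a feel for why rebuild overhead keeps resurfacing as drives grow, here is a simple back of the envelope estimate. The 50 MB/s effective rebuild rate is an assumption for illustration only; actual rates vary by system, RAID level and competing workload.

```python
# Best case time to sequentially rewrite an entire drive during a rebuild.
def rebuild_hours(capacity_gb, rebuild_mb_per_sec):
    capacity_mb = capacity_gb * 1000
    return capacity_mb / rebuild_mb_per_sec / 3600

for capacity_gb in (146, 1000, 2000):  # enterprise 15K vs 1TB and 2TB drives
    hours = rebuild_hours(capacity_gb, 50)  # assume 50 MB/s effective rate
    print(f"{capacity_gb} GB drive: ~{hours:.1f} hours to rebuild")
```

Even at these optimistic rates, a 2TB drive takes an order of magnitude longer to rebuild than the small enterprise drives of a few years ago, which is exactly the window of exposure that dual parity and proactive repair aim to shrink.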

Rest assured, I think that there is a need and plenty of room for continued innovative evolution around RAID related technologies and their associated storage systems or packaging on a go forward basis.

What I find interesting is that some of the issues facing RAID today are similar to those of a decade ago, for example having to deal with large capacity disk drive rebuilds, distributed data protection and availability, performance, ease of use, and so the list goes.

However what happened was that vendors continued to innovate in terms of basic performance, accelerating rebuild rates with improved rebuild algorithms while leveraging faster processors, busses and other techniques. In addition, vendors continued to innovate in terms of new functionality, including adopting RAID 6, which for the better part of a decade outside of a few niche vendors languished as one of those future technologies that probably nobody would ever adopt; we know that to be different now and for the past several years. RAID 6 is one of those areas where vendors who do not have it are either adding it, enhancing it, or telling you why you do not need it or why it is no good for you.

An example of how RAID 6 is being enhanced is boosting performance on normal read and write operations along with accelerating performance during disk rebuilds. Also tied to RAID 6 and disk drive rebuilds are improvements in controller design to detect and proactively make repairs on the fly to minimize or eliminate errors or diminish the need for drive rebuilds, similar to what was done in previous generations. Let's also not forget the improvements in disk drives themselves, boosting performance, availability, capacity and energy characteristics over time.

Funny how these and other enhancements are similar to those made to RAID controller hardware and software in the early to mid 2000s, fine tuning them in support of high capacity SATA disk drives that had different RAS characteristics than higher performance, lower capacity enterprise drives.

Here is my point.

RAID to some may be dead while others continue to rely on it. Meanwhile others are working on enhancing technologies for future generations of storage systems and application requirements. Thus in different shapes, forms, configurations, features, functionality or packaging, the spirit of RAID is very much alive, well and relevant.

Regardless of whether a solution uses two or three disk mirroring for availability, RAID 0 striping of fast SSD, SAS or FC disks for performance with data protection via rapid restoration from some other low cost medium (perhaps RAID 6 or tape), or single, dual or triple parity protection; whether it works on small blocks, multi MByte or volume based chunklets; and whether it is hardware or software based, local or distributed, standard or non standard, chances are there is some theme of RAID involved.

Granted, you do not have to call it RAID if you prefer!

As a closing thought, if RAID were no longer relevant, then why do the post RAID, next generation, life beyond RAID or whatever you prefer to call them technologies need to tie themselves to the themes of RAID? Simple: RAID is still relevant in some shape or form to different audiences, as well as being a great way of stimulating discussion or debate in a constantly evolving industry.

BTW, I'm still waiting for the revolutionary piece of hardware that does not require software, and the software that does not require hardware, and that includes playing games with server less servers using hypervisors :) .



Stay tuned for more about RAID's relevance, as I don't think we have heard the last of this.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Optimize Data Storage for Performance and Capacity Efficiency

This post builds on a recent article I did that can be read here.

Even with tough economic times, there is no such thing as a data recession! Thus the importance of optimizing data storage efficiency, addressing both performance and capacity without impacting availability, in a cost effective way that lets you do more with what you have.

What this means is that even though budgets are tight or have been cut, resulting in reduced spending, overall net storage capacity is up year over year by double digits, if not higher in some environments.

Consequently, there is continued focus on stretching available IT and storage related resources or footprints further while eliminating barriers or constraints. IT footprint constraints can be physical in a cabinet or rack as well as floorspace, power or cooling thresholds and budget among others.

Constraints can be due to lack of performance (bandwidth, IOPS or transactions), poor response time or lack of availability for some environments. Yet for other environments, constraints can be lack of capacity, limited primary or standby power or cooling constraints. Other constraints include budget, staffing or lack of infrastructure resource management (IRM) tools and time for routine tasks.

Look before you leap
Before jumping into an optimization effort, gain insight (if you do not already have it) into where the bottlenecks exist, along with the cause and effect of moving or reconfiguring storage resources. For example, boosting capacity utilization to more fully use storage resources can result in performance issues or data center bottlenecks for other environments.

An alternative scenario is that in the quest to boost performance, storage is seen as being under-utilized, yet when capacity use is increased, lo and behold, response time deteriorates. The result can be a vicious cycle, hence the need to use tools that provide insight into resource usage, both space and activity or performance, and to address the underlying issue rather than simply moving the problem.

Gaining insight means looking at capacity use along with performance and availability activity, and how they consume power, cooling and floor-space. Consequently an important step is to gain insight and knowledge of how your resources are being used to deliver various levels of service.

Tools include storage or system resource management (SRM) tools that report on storage space capacity usage, performance and availability with some tools now adding energy usage metrics along with storage or system resource analysis (SRA) tools.

Cooling Off
Power and cooling are commonly talked about as constraints, either from a cost standpoint, or availability of primary or secondary (e.g. standby) energy and cooling capacity to support growth. Electricity is essential for powering IT equipment including storage enabling devices to do their specific tasks of storing data, moving data, processing data or a combination of these attributes.

Thus, power gets consumed, some work or effort to move and store data takes place, and the by-product is heat that needs to be removed. In a typical IT data center, cooling on average can account for about 50% of energy used, with some sites using less.

With cooling being a large consumer of electricity, a small percentage change in how cooling consumes energy can yield large results. Addressing cooling energy consumption can help with budget or cost issues, or free up cooling capacity to support installation of additional storage or other IT equipment.
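As a quick hypothetical illustration of that leverage (the facility size, cooling share and improvement percentages below are assumed, not measured):

```python
# If cooling is ~50% of a site's energy use, a modest cooling efficiency
# gain frees a large absolute amount of power for other equipment.
site_kw = 1000.0       # total facility draw (assumed)
cooling_share = 0.50   # cooling fraction of total energy use

cooling_kw = site_kw * cooling_share
for improvement in (0.05, 0.10, 0.20):  # cooling efficiency gains
    freed_kw = cooling_kw * improvement
    print(f"{improvement:.0%} cooling gain frees ~{freed_kw:.0f} kW "
          f"({freed_kw / site_kw:.1%} of the whole facility)")
```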

Keep in mind that effective cooling relies on removing heat from as close to the source as possible to avoid over cooling, which requires more energy. If you have not done so, have a facilities review or assessment performed; these can range from a quick walk around to a more in-depth review and thermal airflow analysis. Means of removing heat close to the source include techniques such as intelligent, precision or smart cooling, also known by other marketing names.

Powering Up, or, Powering Down
Speaking of energy or power, in addition to addressing cooling, there are a couple of ways of addressing power consumption by storage equipment (Figure 1). The most commonly discussed approach towards efficiency is energy avoidance, involving powering down storage when not in use, such as first generation MAID, at the cost of performance.

For off-line storage, tape and other removable media give low-cost capacity per watt with little to no energy needed when not in use. Second generation (e.g. MAID 2.0) solutions with intelligent power management (IPM) capabilities have become more prevalent, enabling performance or energy savings on a more granular or selective basis, often as a standard feature in common storage systems.

Figure 1:  Balancing energy avoidance and energy efficiency options for storage

Another approach to energy efficiency, seen in figure 1, is doing more work for active applications per watt of energy to boost productivity. This can be done by using the same amount of energy while doing more work, or by doing the same amount of work with less energy.

For example, instead of using larger capacity disks to improve capacity per watt metrics, active or performance sensitive storage should be looked at on an activity basis such as IOPS, transactions, videos, emails or throughput per watt. Hence, a fast disk drive doing work can be more energy-efficient in terms of productivity than a higher capacity slower disk drive for active workloads, while for idle or inactive data the inverse should hold true.

On a go forward basis, the trend already being seen with some servers and storage systems is to do more work while using less energy. Thus a larger gap between useful work (for active or non idle storage) and the amount of energy consumed yields a better efficiency rating, or take the inverse if smaller numbers are your preference.
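Here is a minimal sketch of comparing drives on an activity per watt vs. a capacity per watt basis; the IOPS, capacity and wattage figures are assumed representative values for illustration only, not taken from any particular product.

```python
# Activity per watt (for active data) vs capacity per watt (for idle data).
drives = {
    # name: (IOPS, capacity TB, watts) -- assumed illustrative values
    "15K RPM 450GB": (180, 0.45, 15.0),
    "7200 RPM 2TB": (80, 2.0, 11.0),
}

for name, (iops, tb, watts) in drives.items():
    print(f"{name}: {iops / watts:.1f} IOPS/watt, {tb / watts:.3f} TB/watt")
# The fast drive wins on IOPS/watt, the big slow drive on TB/watt, which is
# why the metric must match whether the data is active or idle.
```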

Reducing Data Footprint Impact
Data footprint impact reduction tools or techniques for both on-line as well as off-line storage include archiving, data management, compression, deduplication, space-saving snapshots, thin provisioning along with different RAID levels among other approaches. From a storage access standpoint, you can also include bandwidth optimization, data replication optimization, protocol optimizers along with other network technologies including WAFS/WAAS/WADM to help improve efficiency of data movement or access.

Thin provisioning for capacity centric environments can be used to achieve a higher effective storage use level by essentially overbooking storage, similar to how airlines oversell seats on a flight. If you have good historical information and insight into how storage capacity is used and over allocated, thin provisioning enables improved effective storage use for some applications.

However, with thin provisioning, avoid introducing performance bottlenecks by leveraging solutions that work closely with tools providing historical trending information (capacity and performance).
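A minimal sketch of the overbooking math, with assumed numbers:

```python
# Thin provisioning: promise more capacity than physically installed,
# then watch actual written capacity against the physical pool.
physical_tb = 100.0                  # installed usable capacity (assumed)
allocated_tb = [40, 35, 50, 30]      # capacity promised per application
written_tb = [12, 9, 20, 7]          # capacity actually consumed

allocated, written = sum(allocated_tb), sum(written_tb)
print(f"Overbooking ratio: {allocated / physical_tb:.2f}:1")  # 1.55:1
print(f"Physical utilization: {written / physical_tb:.0%}")   # 48%

# Alert well before writes approach the physical pool limit.
if written > 0.75 * physical_tb:
    print("Warning: thin pool filling, add capacity or migrate workloads")
```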

For a technology that some have tried to declare dead in order to prop up other new or emerging solutions, RAID remains relevant given its widespread deployment and the transparent reliance placed on it by organizations of all sizes. RAID also plays a role in addressing storage performance, availability, capacity and energy constraints, as well as acting as a relief tool for them.

The trick is to align the applicable RAID configuration to the task at hand, meeting specific performance, availability, capacity, energy and economic requirements. For some environments a one size fits all approach may work, while others may configure storage using different RAID levels, along with different numbers of drives in RAID sets, to meet specific requirements.


Figure 2:  How various RAID levels and configuration impact or benefit footprint constraints

Figure 2 shows a summary and tradeoffs of various RAID levels. In addition to the RAID level, the number of disks in a RAID set can also have an impact on performance or capacity. For example, by creating a larger RAID 5 or RAID 6 group, the parity overhead can be spread out; however there is a tradeoff. Tradeoffs can be performance bottlenecks on writes or during drive rebuilds, along with potential exposure to drive failures.

All of this comes back to a balancing act of aligning to your specific needs. Some will go with a RAID 10 stripe and mirror to avoid risks, even going so far as to do triple mirroring along with replication. On the other hand, some will go with RAID 5 or RAID 6 to meet cost or availability requirements, and some I have talked with even run RAID 0 for data and applications that need raw speed yet can be restored rapidly from some other medium.
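To make the figure 2 tradeoffs concrete, here is a small sketch using the classic textbook usable capacity and write penalty values for the common RAID levels; real systems vary with implementation, caching and stripe width.

```python
# Usable capacity and back-end I/Os per random small write, by RAID level.
def raid_summary(level, drives, drive_tb):
    if level == "RAID 10":
        return drives * drive_tb / 2, 2      # mirrored: 2 writes
    if level == "RAID 5":
        return (drives - 1) * drive_tb, 4    # read-modify-write: 4 I/Os
    if level == "RAID 6":
        return (drives - 2) * drive_tb, 6    # dual parity: 6 I/Os
    return drives * drive_tb, 1              # RAID 0: no protection

for level in ("RAID 0", "RAID 10", "RAID 5", "RAID 6"):
    usable, penalty = raid_summary(level, drives=8, drive_tb=2)
    print(f"{level}: {usable:.0f} TB usable of 16 TB raw, "
          f"{penalty} back-end I/Os per random write")
# A wider RAID 5/6 group spreads the parity overhead (more usable TB);
# however the write penalty and rebuild exposure remain.
```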

Let's bring it all together with an example
Figure 3 shows a generic example of a before and after optimization for a mixed workload environment; granted, you can increase or decrease the applicable capacity and performance to meet your specific needs. In figure 3, the storage configuration consists of one storage system set up for high performance (left) and another for high-capacity secondary uses (right), such as disk to disk backup and other near-line needs; again, you can scale the approach up or down to your specific needs.

For the performance side (left), 192 x 146GB 15K RPM disks (28TB raw) provide good performance, however with low capacity use. This translates into a low capacity per watt value, however with reasonable IOPS per watt, along with some performance hot spots.

On the capacity centric side (right), there are 192 x 1TB disks (192TB raw) with good space utilization, however with some performance hot spots or bottlenecks, constrained growth, and low IOPS per watt despite reasonable capacity per watt. In the before scenario, the joint power draw (both arrays) is about 15 kW (15,000 watts), which translates to about $16,000 in annual energy costs (cooling excluded) assuming an energy cost of 12 cents per kWh.
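For those who want to check the math, here is the annual energy cost calculation spelled out:

```python
# Worked example of the ~$16,000 annual energy figure above.
draw_kw = 15.0               # combined draw of both arrays (from the text)
hours_per_year = 24 * 365    # 8,760 hours
cost_per_kwh = 0.12          # USD, per the text's assumption

annual_kwh = draw_kw * hours_per_year        # 131,400 kWh
annual_cost = annual_kwh * cost_per_kwh      # ~$15,768
print(f"~{annual_kwh:,.0f} kWh/year -> ${annual_cost:,.0f}/year "
      f"(cooling excluded)")
```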

Note, your specific performance, availability, capacity and energy mileage will vary based on the particular vendor solution and configuration, along with your application characteristics.


Figure 3: Baseline before and after storage optimization (raw hardware) example

Building on the example in figure 3, a combination of techniques along with technologies yields a net performance, capacity and perhaps feature functionality (depending on the specific solution) increase. In addition, floor-space, power, cooling and associated footprints are also reduced. For example, the resulting solution shown (middle) comprises 4 x 250GB flash SSD devices along with 32 x 450GB 15.5K RPM and 124 x 2TB 7200RPM disk drives, enabling a 53TB (raw) capacity increase along with a performance boost.

The previous examples are based on raw or baseline capacity metrics, meaning that further optimization techniques should yield additional benefits. These examples should also help address the question or myth that it costs more to power storage than to buy it, to which the answer should be: it depends.

If you can buy the above solution for, say, under $50,000 (its three year cost to power), let alone under $100,000 (power and cooling), which would also be a good acquisition, then the myth that buying is more expensive than powering holds true. However, if a solution as described above costs more, then the story changes, along with other variables including energy costs for your particular location, re-enforcing the notion that your mileage will vary.

Another tip is that more is not always better.

That is, more disks, ports, processors, controllers or cache do not always equate to better performance. Performance is the sum of how those and other pieces work together in a demonstrable way, ideally measured with your specific application workload rather than what is on a product data sheet.

Additional general tips include:

  • Align the applicable tool, technique or technology to task at hand
  • Look to optimize for both performance and capacity, active and idle storage
  • Consolidated applications and servers need fast servers
  • Fast servers need fast I/O and storage devices to avoid bottlenecks
  • For active storage use an activity per watt metric such as IOPS or transactions per watt
  • For inactive or idle storage, a capacity per watt per footprint metric applies
  • Gain insight and control of how storage resources are used to meet service requirements

It should go without saying, however sometimes what is understood needs to be restated.

In the quest to become more efficient and optimized, avoid introducing performance, quality of service or availability issues by moving problems.

Likewise, look beyond storage space capacity, also considering performance as applicable, in order to become efficient.

Finally, it is all relative in that what might be applicable to one environment or application need may not apply to another.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

I/O Virtualization (IOV) Revisited

Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

Additional benefits of IOV include:

  • Doing more with the resources (people and technology) that already exist while reducing costs
  • Single (or pair for high availability) interconnect for networking and storage I/O
  • Reduction of power, cooling, floor space, and other green efficiency benefits
  • Simplified cabling and reduced complexity for server network and storage interconnects
  • Boosting server performance by maximizing use of I/O or mezzanine slots
  • Reduction of I/O and data center bottlenecks
  • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
  • Scaling I/O capacity to meet high-performance and clustered application needs
  • Leveraging common cabling infrastructure and physical networking facilities

Before going further, let's take a step backwards for a few moments.

To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of the role of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

TIERED ACCESS FOR SERVERS AND STORAGE
There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus, enabling them to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over the Internet, cloud, XaaS, or managed services providers, as shown in figure 1.

Figure 1: The Big Picture – Data Center I/O and Networking (from The Green and Virtual Data Center, CRC, (C) 2009)

The challenge of I/O is that some form of connectivity (logical and physical) along with associated software is required, and time delays are incurred while waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory using internal local CPU to memory interconnects. In other words, fast servers or processors need fast I/O in terms of low latency, I/O operations (IOPS) and bandwidth capabilities.

Figure 2: Tiered I/O and Networking Access (from The Green and Virtual Data Center, CRC, (C) 2009)

Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
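One way to see the relationship between latency and IOPS is Little's law from basic queuing theory: throughput equals outstanding work divided by response time. The latency values in this sketch are assumed for illustration, not measurements of any particular device.

```python
# Little's law: IOPS = outstanding I/Os / latency (in seconds).
def iops(outstanding_ios, latency_ms):
    return outstanding_ios / (latency_ms / 1000.0)

for latency_ms in (10.0, 5.0, 1.0, 0.2):  # disk, cached, SSD-class latencies
    print(f"{latency_ms} ms at 16 outstanding I/Os -> "
          f"{iops(16, latency_ms):,.0f} IOPS")
# Halving latency doubles achievable IOPS at the same concurrency, which is
# why faster servers are more exposed to slow I/O.
```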

Peripheral Component Interconnect (PCI)
Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more of each for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack mount systems.
Figure 3: Dedicated PCI adapters for I/O and networking devices (from The Green and Virtual Data Center, CRC, (C) 2009)

Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

Figure 4: PCI IOV Single Root Configuration Example (from The Green and Virtual Data Center, CRC, (C) 2009)

In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with the PCIe version and signaling rate determine performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, while PCIe is capable of over 4 GB per second (e.g., 32 Gbit per second) in each direction using 16 lanes for high-end servers.
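As a sanity check on those numbers, here is a quick sketch of per direction PCIe bandwidth by lane count, assuming first generation PCIe signaling (2.5 GT/s per lane with 8b/10b encoding):

```python
# First generation PCIe: 2.5 GT/s per lane, 8b/10b encoding leaves
# ~250 MB/s of usable bandwidth per lane in each direction.
GT_PER_LANE = 2.5e9            # transfers per second per lane
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b line code overhead

def pcie1_mb_per_sec(lanes):
    bits_per_sec = GT_PER_LANE * ENCODING_EFFICIENCY * lanes
    return bits_per_sec / 8 / 1e6  # MB/s per direction

for lanes in (1, 4, 8, 16, 32):
    print(f"x{lanes}: ~{pcie1_mb_per_sec(lanes):,.0f} MB/s per direction")
# x16 works out to ~4,000 MB/s, matching the "over 4 GB per second" above.
```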

The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCISIG) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance, but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

I/O VIRTUALIZATION (IOV)
On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft HyperV or VMware ESX/vSphere, the hypervisor is able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine. With a virtual HBA or NIC, physical adapter resources are carved up and allocated in the manner of virtual machines, but instead of hosting a guest operating system like Windows, UNIX, or Linux, each presents itself as a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC.

In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security requirements are met.

PCI-SIG IOV
PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single-root IOV (SR IOV), and multi-root IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single root (SR IOV) enables multiple guest operating systems to access a single I/O device simultaneously without having to rely on a hypervisor for a virtual HBA or NIC.

Figure 5: PCI SIG IOV (from The Green and Virtual Data Center, CRC, (C) 2009)

The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without incurring any potential I/O overhead from a virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

Figure 6: PCI SIG MR IOV (from The Green and Virtual Data Center, CRC, (C) 2009)

Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high performance short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. For a SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example, InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged networking adapters (CNA) also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely with transparency.

The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

InfiniBand IOV
InfiniBand based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers that then see a Fibre Channel HBA and Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

The value of InfiniBand converged networks is that they exist today and can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

General takeaway points include the following:

  • Minimize the impact of I/O delays to applications, servers, storage, and networks
  • Do more with what you have, including improving utilization and performance
  • Consider latency, effective bandwidth, and availability in addition to cost
  • Apply the appropriate type and tiered I/O and networking to the task at hand
  • I/O operations and connectivity are being virtualized to simplify management
  • Convergence of networking transports and protocols continues to evolve
  • PCIe IOV is complementary to converged networking including FCoE

Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However until that time, or at least for the foreseeable future, several things can be done to minimize the impacts of I/O for local and remote networking as well as to simplify connectivity.

Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Storage Efficiency and Optimization – The Other Green

For those of you in the New York City area, I will be presenting live in person at the Storage Decisions conference September 23, 2009: The Other Green, Storage Efficiency and Optimization.

Throw out the "green" buzzword, and you’re still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy efficient technologies to achieve optimization objectives.

To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize on-line active or primary as well as near-line or secondary storage environments during tough economic times, and to position for future growth; after all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical), what's different between them
  • Optimization and the need for speed vs. the need for capacity, finding the right balance
  • Metrics & measurements for management insight, what the industry is doing (or not doing)
  • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
  • Data footprint reduction (archive, compress, dedupe) and thin provision among others
  • Best practices, financial incentives and what you can do today

This is a free event for IT professionals; however, I hear space is limited, so learn more and register here.

For those interested in broader IT data center and infrastructure optimization, check out the on-going seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities, which continues September 22, 2009 with a stop in Chicago. This is also a free seminar; register and learn more here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

SPC and Storage Benchmarking Games


There is a post over in one of the LinkedIn discussion forums about Storage Performance Council (SPC) benchmarks being misleading, to which I just posted a short response. Here’s the full post, as LinkedIn has a response length limit.

While the SPC is far from perfect, it is, at least for block storage, arguably better than doing nothing.

For the most part, SPC has become a de facto standard for at least block storage benchmarks, independent of using IOmeter or other tools or vendor specific simulations, similar to how MSFT ESRP is for Exchange, TPC for databases, SPEC for NFS and so forth. In fact, SPC even recently, rather quietly, rolled out a new set of what could be considered the basis for green storage benchmarks. I would argue that SPC results in themselves are not misleading, particularly if you take the time to look at both the executive and full disclosures and look beyond the summary.

Some vendors have taken advantage of the SPC results by playing games with discounting on prices (something that is allowed under SPC rules) to make apples to oranges comparisons on cost per IOP or other ploys. This practice is nothing new to the IT industry, or other industries for that matter, hence benchmark games.

Where the misleading SPC issue can come into play is for those who simply look at what a vendor is claiming without looking at the rest of the story, or who do not take the time to review the results and make apples to apples, rather than believing the apples to oranges, comparisons. After all, the results are there for a reason. That reason is for those really interested to dig in and sift through the material, granted not everyone wants to do that.

For example, some vendors can show a highly discounted price to get a better cost per IOP on an apples to oranges basis; however, when prices are normalized, the results can be quite different. Here is the real gem for those who dig into the SPC results, including the configurations: latency under workload is also reported.

The reason that latency is a gem is that generally speaking, latency does not lie.

What this means is that if vendor A doubles the amount of cache, doubles the number of controllers, doubles the number of disk drives, plays games with actual storage utilization (ASU), and utilizes fast interfaces from 10 GbE iSCSI to 8Gb FC or FCoE or SAS to get a better cost per IOP number with discounting, look at the latency numbers. There have been some recent examples of this where vendor A has a better cost per IOP while achieving a higher number of IOPS at a lower cost compared to vendor B, which is what is typically reported in a press release or news story. (See a blog entry that also points to a CMG presentation discussion around this topic here.)

Then go and look at the two results: vendor B may be at list price while vendor A is severely discounted, which is not a bad thing, as that is then the starting price from which customers should begin negotiations. However, to be fair, normalize the pricing for fun, look at how much more equipment vendor A may need (having to discount to offset the increased amount of hardware), and then look at latency.
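Here is a minimal sketch of what that normalization looks like; the vendors, prices, IOPS and discount levels are entirely hypothetical:

```python
# Normalize SPC-style $/IOPS claims back to list price for comparison.
results = {
    # vendor: (reported IOPS, list price USD, discount in the submission)
    "Vendor A": (200_000, 1_500_000, 0.40),  # heavy discount, more hardware
    "Vendor B": (150_000, 900_000, 0.00),    # submitted at list price
}

for vendor, (iops, list_price, discount) in results.items():
    submitted_price = list_price * (1 - discount)
    print(f"{vendor}: ${submitted_price / iops:.2f}/IOPS as submitted, "
          f"${list_price / iops:.2f}/IOPS at list")
# Vendor A looks cheaper per IOPS as submitted ($4.50 vs $6.00), yet at
# list price the picture reverses ($7.50 vs $6.00); then check latency.
```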

In some of the recently reported record results, the latency results are actually better for vendor B than for vendor A. Why does latency matter? Beyond showing what a controller can actually do in terms of leveraging the number of disks, cache and interface ports, the big kicker is for those talking about SSD (RAM or FLASH), in that SSD generally is about latency. To fully and effectively utilize SSD, which is a low latency device, you want a controller that can do a decent job of handling IOPS; however you also need a controller that can handle IOPS with low latency under heavy workload conditions.

Thus the SPC, again while far from perfect, is at least useful for a thumbnail sketch and comparison and is not necessarily misleading; more often than not it is how the results are used that is misleading. Now, in the quest of the SPC administrators to gain more members and broader industry participation and thus secure their own future, is the SPC organization or administration opening itself up to being used more and more as a marketing tool in ways that potentially compromise its credibility (I know, some will dispute the validity of SPC, however that is reserved for a different discussion ;) )?

There is a bit of déjà vu here for those involved with RAID and storage who recall how the RAID Advisory Board (RAB), in its quest to gain broader industry adoption and support, succumbed to marketing pressures and use, or what some would describe as misuse, and is now a member of the “Where are they now” club!

Don’t get me wrong here; I like the SPC tests/results/format, and there is a lot of good information in the SPC. The various vendor folks who work very hard behind the scenes to make the SPC actually work and continue to evolve also all deserve a great big kudos, an “atta boy” or “atta girl” for the fine work they have been doing, work that I hope does not become lost in the quest to gain market adoption for the SPC.

Ok, so then this should all beg the question of what is the best benchmark. Simple: the one that most closely resembles your actual applications, workload, conditions, configuration and environment.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Storage Decisions Spring 2009 Sessions Update


The conference lineup and details for the Spring 2009 Storage Decisions event (June 1st and 2nd) in Chicago are coming together, including two talks/presentations that I will be doing. One will be in Track 2 (Disaster Recovery), titled "Server Virtualization, Business Continuance and Disaster Recovery", and the other in Track 6 (Management/Executive), titled "The Other Green — Storage Efficiency and Optimization", with both sessions leveraging themes and topics from my new book "The Green and Virtual Data Center" (CRC).

Track 2: Disaster Recovery
Server Virtualization, Business Continuance and Disaster Recovery
Presented by Greg Schulz, Founder and Senior Analyst, StorageIO
Server virtualization has the potential to bring sophisticated business continuance (BC) and disaster recovery (DR) techniques to organizations that previously didn’t have the means to adopt them. Likewise, virtualized as well as cloud environments need to be included in a BC/DR plan to enable application and data availability. Learn tips and tricks on building an accessible BC/DR strategy and plan using server virtualization and the storage products that enable efficient, flexible green and virtual data centers.

Topics include:
* Cross technology domain data protection management
* Tiered data protection to stretch your IT budget dollar
* What’s needed to enable BC/DR for virtualized environments
* How virtualization can enable BC/DR for non-virtualized environments
* General HA, BC/DR and data protection tips for virtual environments

Track 6: Management/Executive
The Other Green — Storage Efficiency and Optimization
Throw out the "green" buzzword, and you’re still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy efficient technologies to achieve optimization objectives. To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize on-line active or primary as well as near-line or secondary storage environments during tough economic times, and to position for future growth; after all, there is no such thing as a data recession!

Topics include:
* Energy efficiency (strategic) vs. energy avoidance (tactical)
* Optimization and the need for speed vs. the need for capacity
* Metrics and measurements for management insight
* Tiered storage and tiered access including SSD, FC, SAS and clouds
* Data footprint reduction (archive, compress, dedupe) and thin provision
* Best practices, financial incentives and what you can do today

See you in Chicago in June if not before then. Learn more about other upcoming events and activities on the StorageIO events page.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved