NetApp buying LSIs Engenio Storage Business Unit

Storage I/O trends

This has been a busy week. On Monday, Western Digital (WD) announced that they were buying the disk drive business (HGST) from Hitachi Ltd. for about $4.3 billion USD. The deal includes about $3.5B in cash and 25 million WD common shares (roughly $750M USD), which will give Hitachi Ltd. about ten percent ownership in WD along with adding two Hitachi representatives to the WD board of directors. WD now moves into the number one hard disk drive (HDD) spot above Seagate (note that Hitachi is not selling HDS), in addition to gaining competitive positioning in both the enterprise HDD and emerging SSD markets.

Today NetApp announced that they have agreed to purchase portions of the LSI storage business known as Engenio for $480M USD.

The business and technology that LSI is selling to NetApp (aka Engenio) is the external storage system business that accounted for about $705M of their approximately $900M+ storage business in 2010. This piece of the business represents external (outside of the server) shared RAID storage systems that support Serial Attached SCSI (SAS), iSCSI, Fibre Channel (FC) and emerging FCoE (Fibre Channel over Ethernet) with SSD, SAS and FC high performance HDDs as well as high capacity HDDs. NetApp has block storage, however their strong suit (sorry NetApp guys) is file, while Engenio's strong suit is block that attaches to gateways from NetApp as well as others, in addition to servers for scale out NAS and cloud.

What NetApp is getting from LSI is the business that sells storage systems or their components to OEMs including Dell, IBM (here and here), Oracle, SGI and Teradata (a former NCR spinoff) among others.

What LSI is retaining is their custom storage silicon and ICs, PCI RAID adapter and host bus adapter (HBA) cards including MegaRAID and 3ware, along with SAS chips, SAS switches, their PCIe SSD card and the Onstor NAS product they acquired about a year ago. The other parts of the LSI business that make chips for storage, networking and communications vendors are also not affected by this deal.

In other words, the sign in front of the Wichita LSI facility that used to say NCR will now probably include a NetApp logo once the deal closes.

For those not familiar, Tom Georgens, the current CEO of NetApp, is very familiar with Engenio and LSI as he used to work there (after leaving a career at EMC). In fact, Mr. Georgens was part of the most recent attempt to spin the external storage business out of LSI back in the mid 2000s, when it received the Engenio name and branding. In addition to Tom Georgens, Vic Mahadevan, the current NetApp Chief Strategy Officer, recently worked at LSI and before that at BMC, Compaq and Maxxan among others.

What do I mean by the most recent attempt to spin the storage business out of LSI? Simple, the Engenio storage business traces its lineage back to NCR and what became known as Symbios Logic, which LSI acquired as part of some other acquisitions.

Going back to the late 90s, there was word on the street that the then LSI management was not sure what to do with the storage business, as their core business was and still is making high volume chips and related technologies. Current LSI CEO Abhi Talwalkar is a chip guy (nothing wrong with that) who honed his skills at Intel. Thus it should not be a surprise that there is a focus on the LSI core business model of making their own products as well as producing silicon (not the implant stuff) for IT and consumer electronics (read their annual report).

As part of the transaction, LSI has already indicated that they will use all or some of the cash to buy back their stock. However, I also wonder if this does not open the door for Abhi and his team to do some other acquisitions more synergistic with their core business.

What does NetApp get:

  • Expanded OEM and channel distribution capabilities
  • Block based products to coexist with their NAS gateways
  • Business with an established revenue base
  • Footprint into new or different markets
  • Opportunity to sell different product set to existing customers

NetApp gets an OEM channel distribution model to complement what they already have (mainly IBM) in addition to their mainly direct sales and VAR channels. Note that Engenio went to an all OEM/distribution model several years ago while maintaining direct touch support for their partners.

Note that NetApp is providing financial guidance that the deal could add $750M to FY12 revenue, which is based on retaining some portion of the existing OEM business while moving into new markets as well as increasing product diversity with existing direct customers, VARs or channel partners.

NetApp also gets to address storage market fragmentation and enable OEM as well as channel diversification including selling to other server vendors besides IBM. The Engenio model in addition to supporting Dell, IBM, Oracle, SGI and other server vendors also involves working with vertical solution integrator OEMs in the video, entertainment, High Performance Compute (HPC), cloud and MSP markets. This means that NetApp can enter new markets where bandwidth performance is needed including scale out NAS (beyond what NetApp has been doing). This also means that NetApp gets a product to sell into markets where back end storage for big data, bulk storage, media and entertainment, cloud and MSP as well as other applications leverage SAS, iSCSI or FC and FCoE beyond what their current lineup offers. Who sells into those spaces? Dell, HP, IBM, Oracle, SGI and Supermicro among others.

What does LSI get:

  • $480M USD in cash, some of which can be used to buy back stock to keep investors happy
  • A streamlined business or an open door to new ones
  • Perhaps increase OEM sales to other new or existing customers
  • Perhaps do some acquisitions or be acquired

What does Engenio get:
A new parent that hopefully invests in the technology and marketing of the solution sets as well as leverages and takes care of the installed base of customers.

What do the combined Engenio and NetApp OEMs and partners get:
With the combination of the organizations, hopefully streamlined support, service and marketing, along with product enhancements to address new or different needs. Possibly also comfort in knowing that Engenio now has a home and its future is somewhat known.

What about the Engenio employees?
The reason I bring this up is to wonder what happens to those who have many years invested along with their LSI stock, which I presume they keep in the hope that the sale gives them a future return on their investment or efforts. Having been in similar acquisitions in the past, it can be a rough go, however if the acquirer has a bright future, then enough said.

Some random thoughts:

Is this one of those industry trendy, sexy, cool everybody drooling type deals with new and upcoming technology and marketing buzz?
No

Is this one of those industry deals that has good upside potential if executed upon and leveraged?
Yes

NetApp already has a storage offering, so why do they need Engenio?
No offense to NetApp, however they have needed a robust block storage offering to complement their NAS file serving and extensive software functionality in order to move into different markets. This is not all that different from what EMC needed to do in the late 90s, extending their capabilities beyond their cash cow Symmetrix platform by acquiring Data General (DG) to have a midrange offering.

NetApp is risking $480M on a business with technologies that some see or say is on the decline, so why would they do such a thing?
Ok, let's set the technology topics aside and look at this from a pure numbers perspective with a couple of scenarios (I'm not a financial person, so go easy on me please). What some financial people have told me with other deals is that it is sometimes about getting a return on cash vs. having it do nothing. So with that and other things in mind, say NetApp just lets $480M sit in the bank: can they get 12 percent or better interest? Probably not, and if they can, I want the name of that bank. What that means is that over a five year period, even if they could get that rate of return (12 percent compounded), they would only make about $846M-480M=$366M on the investment (I know, there are tax and other financial considerations, however let's keep it simple). Now let's take another scenario and assume that NetApp simply rides a decline of the business at say a 20 percent per year rate (how many businesses are growing, or in storage declining, at 20 percent per year?) for five years. That still works out to somewhere around a $1.4B yield. Let's take a different scenario and assume that NetApp can simply maintain an annual run rate of $700-750M for those five years; that works out to around $3.66B-480M=$3.1B of revenue beyond the purchase price. In other words, even with some decline, over a five year period the OEM business pays for the deal alone and perhaps helps fund investment in technology improvement, with the business balance being positive upside.
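
For those who like to check the arithmetic, here is a minimal Python sketch of the park-the-cash and maintain-the-run-rate scenarios; the 12 percent rate, the $730M midpoint run rate and the five year window are simply the assumptions from this discussion (not NetApp guidance), so the rounded output will differ slightly from the figures above.

# Back of the envelope sketch of two of the scenarios above (assumptions only, not guidance).

purchase_m = 480.0   # purchase price in millions of USD
years = 5

# Scenario: park the $480M at an assumed 12 percent annual compounded return.
bank_m = purchase_m * (1.12 ** years)
print(f"Cash at 12% for {years} years: about ${bank_m:.0f}M, "
      f"a gain of about ${bank_m - purchase_m:.0f}M")

# Scenario: simply hold a $700M to $750M annual run rate (midpoint used here).
run_rate_m = 730.0
maintained_m = run_rate_m * years
print(f"Maintained run rate: about ${maintained_m / 1000:.2f}B over {years} years, "
      f"roughly ${(maintained_m - purchase_m) / 1000:.1f}B beyond the purchase price")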

Now both of those are extreme scenarios, so let's take something more likely, such as NetApp being able to simply maintain a $700-750M run rate by keeping some of the OEM business, finding new markets for channel and OEM as well as direct sales, and expanding their footprint into new markets. Now that math gets even more interesting. Having said all of that, NetApp needs to keep investing in the business and products to get those returns, which might help explain the relatively low price compared to the run rate.

Is this a good deal for NetApp?
IMHO yes, as long as NetApp does not screw it up. If NetApp can manage the business, invest in it, and grow into new markets instead of simply cannibalizing existing ones, they will have made a good deal, similar to what EMC did with DG back in the late 90s. However, NetApp needs to execute, leverage what they are buying, invest in it and pick up new business to make up for the declining business with some of the OEMs.

With several hundred thousand systems or controllers having been sold over the years (granted, how many are actually still running, your guess is as good as mine), NetApp has a footprint to leverage with their other products. For example, should IBM, Dell or Oracle completely walk away from those installed footprints, NetApp can move in with firmware or other upgrades to support them, plus upsell their NAS gateways to add value with compression, dedupe, etc.

What about NetApp's acquisition track record?
Fair question, although I'm sure the NetApp faithful won't like it. NetApp has had their ups and downs with acquisitions (Topio, Decru, Spinnaker, Onaro, etc). Perhaps with this one, like EMC in the late 90s, which bought DG to overcome some rough up and down acquisitions, NetApp can also get their mojo on (see this post). While we are on the topic of acquisitions, NetApp recently bought Akorri and, last year, Bycast, which they now call StorageGRID and which has been OEMed in the past by IBM. Guess what storage was commonly used under the IBM servers running the Bycast software? If you guessed XIV, you might want to take a mulligan or a do over. BTW, HP has also OEMed the Bycast software. If you are not familiar with Bycast and are interested in automated movement, tiering, policy management, objects and other buzzwords, ping your favorite NetApp person, as it is a diamond in the rough if leveraged beyond its healthcare roots.

What does this mean for Xyratex and Dot Hill, who are NetApp partners?
My guess is that for now, the general purpose enclosures would stay the same (e.g. Xyratex) until there is a business case to do something different. For the high density enclosures, that could be a different scenario. As for others, we will have to wait and see.

Will NetApp port OnTap into Engenio?
The easiest and fastest thing is to do what NetApp and Engenio OEM customers have already been doing, that is, place the Engenio arrays behind the NetApp FAS vFiler. Note that Engenio has storage systems that speak SAS to HDDs and SSDs and that are able to speak SAS, iSCSI and FC to hosts or gateways. NetApp has also embraced SAS for back end storage, so maybe we will see them leverage a SAS connection out of their filers in the future to SAS storage systems or shelves instead of FC loop?

Speaking of SAS host or server attached storage, guess what many cloud, MSP, high performance and other environments are using for storage on the back end of their clusters or scale out NAS systems?
Yup, SAS.

Guess what gap NetApp gets to fill, joining Dell, HP, IBM and Oracle, who can now give a choice of SAS, iSCSI or FC in addition to NAS?
Yup, SAS.

Care to guess what storage vendor we can expect to hear downplay SAS as a storage system to server or gateway technology?
Hmm

Is this all about SAS?
No

Will this move scare EMC?
No, EMC does not get scared, or at least that is what they tell me.

Will LSI buy Fusion-io (who has filed or is filing their documents to IPO) or someone else?
Your guess or speculation is better than mine. However, LSI already has and is retaining their own PCIe SSD card.

Why only $480M for a business that did $705M in 2010?
Good question. There is risk in that if NetApp does not invest in the product, marketing and relationships, they will not see the previous annual run rate, so it is not a straight annuity. Consequently NetApp is taking on risk with the business, and thus they should get the reward if they can run with it. Another reason is that there probably were not any investment bankers or brokers running up the price.

Why didn't Dell buy Engenio for $480M?
Good question. If they had the chance, they should have, however it probably would not have been a good fit, as Dell needs direct sales vs. OEM sales.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

From bits to bytes: Decoding Encoding

With networking, care should be taken to understand if a given speed or performance capacity is being specified in bits or bytes as well as in base 2 (binary) or base 10 (decimal).

Another consideration and potential point of confusion are line rates (GBaud) and link speeds, which can vary based on encoding and low level frame or packet size. For example, 1GbE along with 1, 2, 4 and 8Gb Fibre Channel and Serial Attached SCSI (SAS) use an 8b/10b encoding scheme. This means that at the lowest physical layer, 8 bits of data are placed into 10 bits for transmission, with the extra 2 bits being encoding overhead used for signal integrity and clock recovery.

With an 8Gb link using 8b/10b encoding, 2 out of every 10 bits are overhead. The actual data throughput for bandwidth, or the number of IOPS, frames or packets per second, is a function of the link speed, encoding and baud rate. For example, 1Gb FC has a 1.0625Gb per second line rate, which is multiplied by the generation, so 8Gb FC or 8GFC runs at 8 x 1.0625 = 8.5Gb per second.

Remember to factor in that encoding overhead (e.g. 8 of every 10 bits carry data with 8b/10b): usable bandwidth on the 8GFC link is about 6.8Gb per second, or about 850MBytes (6.8Gb / 8 bits) per second. 10GbE uses 64b/66b encoding, which means that for every 64 bits of data only 2 bits of encoding overhead are added, thus far less overhead.
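
To make the encoding arithmetic concrete, here is a minimal Python sketch that applies the encoding efficiency to a nominal line rate; the helper function is just for illustration, using the nominal rates cited above.

# Usable bandwidth after low level encoding overhead (illustrative sketch).

def effective_gbps(line_rate_gbps, data_bits, total_bits):
    """Return usable Gbit/s after encoding such as 8b/10b or 64b/66b."""
    return line_rate_gbps * data_bits / total_bits

# 8GFC: 8 x 1.0625 = 8.5Gb/s line rate with 8b/10b encoding
fc8 = effective_gbps(8.5, 8, 10)
print(f"8GFC usable: about {fc8:.1f} Gb/s, or about {fc8 / 8 * 1000:.0f} MB/s")

# 10GbE: 10Gb/s with 64b/66b encoding (only 2 bits of overhead per 66)
ge10 = effective_gbps(10.0, 64, 66)
print(f"10GbE usable: about {ge10:.1f} Gb/s")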

What do all of these bits and bytes have to do with clouds and virtual data storage networks?

Quite a bit, when you consider what we have talked about regarding the need to support more information processing, moving and storing in a denser footprint.

In order to support higher densities, faster servers, storage and networks are not enough on their own; various approaches to reducing the data footprint impact are also required.

What this means is that for fast networks to be effective, they also have to have lower overhead, so that capacity is spent on productive work and data rather than on moving extra encoding bits in the same amount of time.

PCIe leverages multiple serial unidirectional point to point links, known as lanes, compared to traditional PCI, which used a parallel bus based design. With traditional PCI, the bus width varied from 32 to 64 bits, while with PCIe, the number of lanes combined with the PCIe version and signaling rate determines performance. PCIe interfaces can have one, two, four, eight, sixteen or thirty two lanes for data movement depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MByte per second with a 64 bit, 66 MHz signaling rate.

 

                                  PCIe Gen 1     PCIe Gen 2     PCIe Gen 3
Giga transfers per second         2.5            5              8
Encoding scheme                   8b/10b         8b/10b         128b/130b
Data rate per lane per second     250MB          500MB          1GB
x32 lanes per second              8GB            16GB           32GB

Table 1: PCIe generation comparisons

Table 1 shows performance characteristics of the various PCIe generations. With PCIe Gen 3, the effective performance essentially doubles, however the actual underlying transfer speed does not double as it has in the past. Instead, the improved performance comes from a combination of roughly 60 percent faster link speed and 40 percent efficiency gains from switching from an 8b/10b to a 128b/130b encoding scheme, among other optimizations.
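
As a quick sanity check of Table 1, the per lane figures follow directly from the transfer rate and the encoding efficiency. Here is a small Python sketch of that calculation (the Gen 3 result rounds to the roughly 1GB per lane and 32GB x32 figures shown above):

# Per lane PCIe data rate derived from transfer rate (GT/s) and encoding (sketch).

def lane_mb_per_sec(gt_per_sec, data_bits, total_bits):
    """GT/s x encoding efficiency, converted from Gbit/s to MByte/s."""
    return gt_per_sec * (data_bits / total_bits) * 1000 / 8

for gen, gts, enc in (("Gen 1", 2.5, (8, 10)),
                      ("Gen 2", 5.0, (8, 10)),
                      ("Gen 3", 8.0, (128, 130))):
    per_lane = lane_mb_per_sec(gts, *enc)
    print(f"PCIe {gen}: ~{per_lane:.0f} MB/s per lane, ~{per_lane * 32 / 1000:.1f} GB/s with x32 lanes")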

Serial interface              Encoding
PCIe Gen 1                    8b/10b
PCIe Gen 2                    8b/10b
PCIe Gen 3                    128b/130b
Ethernet 1Gb                  8b/10b
Ethernet 10Gb                 64b/66b
Fibre Channel 1/2/4/8 Gb      8b/10b
SAS 6Gb                       8b/10b

Table 2: Common encoding

Bringing this all together: in order to support cloud and virtual computing environments, data networks need to become faster as well as more efficient, otherwise you will be paying for more overhead per second vs. productive work being done. For example, with 64b/66b encoding on a 10GbE or FCoE link, about 97 percent (64/66) of the overall bandwidth, or roughly 9.7Gb per second, is available for useful work.

By comparison, if 8b/10b encoding were used, only 80 percent of the available bandwidth would be left for useful data movement. For throughput oriented environments this means better bandwidth, while for applications that require lower response time or latency it means more IOPS, frames or packets per second.

The above is an example of where a small change, such as the encoding scheme, can have a large benefit when applied to high volume or large environments.

Learn more in The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC) at https://storageio.com/books

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Is the new HDS VSP really the MVSP?

Today HDS announced, with much fanfare that must have had a million dollar launch budget, the VSP (successor to the previous USP V and USP VM).

I'm also thinking that the HDS VSP (not to be confused with the HP SVSP that HP OEMs via LSI) could also be called the HDS MVSP.

Now if you are part of the HDS SAN, LAN, MAN, WAN or FAN bandwagon, MVSP could mean Most Valuable Storage Platform or Most Virtualized Storage Product. MVSP might also be called More Virtualized Storage Products by others.

Yet OTOH, MVSP could be More Virtual Story Points (e.g. talking points) for HDS to build upon when comparing to their previous products.

For example among others:

More cache to drive cash movement (e.g. cash velocity or revenue)
More claims and counter claims of industry uniques or firsts
More cloud material or discussion topics
More cross points
More data mobility
More density
More FUD and MUD throwing by competitors
More functionality
More packets of information to move, manage and store
More pages in the media
More partitioning of resources
More partners to sell through or to
More PBytes
More performance and bandwidths
More platforms virtualized
More platters
More points of resiliency
More ports to connect to or through
More posts from bloggers
More power management, Eco and Green talking points
More press releases
More processors
More products to sell
More profits to be made
More protocols (Fibre Channel, FICON, FCoE, NAS) supported
More pundits praises
More SAS, SATA and SSD (flash drives) devices supported
More scale up, scale out, and scale within
More security
More single (Virtual and Physical) pane of glass managements
More software to sell and be licensed by customers
More use of virtualization, 3D and other TLAs
More videos to watch or be stored

I'm sure more points can be thought of, however that is a good start for now, including some to have a bit of fun with.

Read more about the new HDS announcement here, here, here and here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Poll: Networking Convergence, Ethernet, InfiniBand or both?

I just received an email in my inbox from Voltaire along with a pile of other advertisements, advisories, alerts and announcements from other folks.

What caught my eye in the email was that it announces new survey results that you can read here as well as below.

The question that this survey announcement prompts for me, and hence why I am posting it here, is how dominant InfiniBand will be on a go forward basis. The answer, I think, is it depends…

It depends on the target market or audience, what their applications and technology preferences are along with other service requirements.

I think that there is and will remain a place for InfiniBand; the question is where and for what types of environments, as well as why have both InfiniBand and Ethernet, including Fibre Channel over Ethernet (FCoE), in support of unified or converged I/O and data networking.

So here is the note that I received from Voltaire:

 

Hello,

A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

The full press release is below.  Please contact me if you would like to speak with a Voltaire executive for further commentary.

Regards,
Christy

____________________________________________________________
Christy Lynch| 978.439.5407(o) |617.794.1362(m)
Director, Corporate Communications
Voltaire – The Leader in Scale-Out Data Center Fabrics
christyl@voltaire.com | www.voltaire.com
Follow us on Twitter: www.twitter.com/voltaireltd

FOR IMMEDIATE RELEASE:

IT Survey Finds Executives Planning Converged Network Strategy:
Using Both InfiniBand and Ethernet

Fabric Performance Key to Making Data Centers Operate More Efficiently

CHELMSFORD, Mass. and RA'ANANA, Israel, January 12, 2010 – A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

Voltaire queried more than 120 members of the Global CIO & Executive IT Group, which includes CIOs, senior IT executives, and others in the field that attended the 2009 MIT Sloan CIO Symposium. The survey explored their data center networking needs, their choice of interconnect technologies (fabrics) for the enterprise, and criteria for making technology purchasing decisions.

“Increasingly, InfiniBand and Ethernet share the ability to address key networking requirements of virtualized, scale-out data centers, such as performance, efficiency, and scalability,” noted Asaf Somekh, vice president of marketing, Voltaire. “By adopting a converged network strategy, IT executives can build on their pre-existing investments, and leverage the best of both technologies.”

When asked about their fabric choices, 45 percent of the respondents said they planned to implement both InfiniBand with Ethernet as they made future data center enhancements. Another 54 percent intended to rely on Ethernet alone.

Among additional survey results:

  • When asked to rank the most important characteristics for their data center fabric, the largest number (31 percent) cited high bandwidth. Twenty-two percent cited low latency, and 17 percent said scalability.
  • When asked about their top data center networking priorities for the next two years, 34 percent again cited performance. Twenty-seven percent mentioned reducing costs, and 16 percent cited improving service levels.
  • A majority (nearly 60 percent) favored a fabric/network that is supported or backed by a global server manufacturer.

InfiniBand and Ethernet interconnect technologies are widely used in today’s data centers to speed up and make the most of computing applications, and to enable faster sharing of data among storage and server networks. Voltaire’s server and storage fabric switches leverage both technologies for optimum efficiency. The company provides InfiniBand products used in supercomputers, high-performance computing, and enterprise environments, as well as its Ethernet products to help a broad array of enterprise data centers meet their performance requirements and consolidation plans.

About Voltaire
Voltaire (NASDAQ: VOLT) is a leading provider of scale-out computing fabrics for data centers, high performance computing and cloud environments. Voltaire’s family of server and storage fabric switches and advanced management software improve performance of mission-critical applications, increase efficiency and reduce costs through infrastructure consolidation and lower power consumption. Used by more than 30 percent of the Fortune 100 and other premier organizations across many industries, including many of the TOP500 supercomputers, Voltaire products are included in server and blade offerings from Bull, HP, IBM, NEC and Sun. Founded in 1997, Voltaire is headquartered in Ra’anana, Israel and Chelmsford, Massachusetts. More information is available at www.voltaire.com or by calling 1-800-865-8247.

Forward Looking Statements
Information provided in this press release may contain statements relating to current expectations, estimates, forecasts and projections about future events that are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. These forward-looking statements generally relate to Voltaire’s plans, objectives and expectations for future operations and are based upon management’s current estimates and projections of future results or trends. They also include third-party projections regarding expected industry growth rates. Actual future results may differ materially from those projected as a result of certain risks and uncertainties. These factors include, but are not limited to, those discussed under the heading "Risk Factors" in Voltaire’s annual report on Form 20-F for the year ended December 31, 2008. These forward-looking statements are made only as of the date hereof, and we undertake no obligation to update or revise the forward-looking statements, whether as a result of new information, future events or otherwise.

###

All product and company names mentioned herein may be the trademarks of their respective owners.

 

End of Voltaire transmission:

I/O, storage and networking interface wars come and go, similar to other technology debates about what is the best or what will reign supreme.

Some recent debates have been around Fibre Channel vs. iSCSI or iSCSI vs. Fibre Channel (depends on your perspective), SAN vs. NAS, NAS vs. SAS, SAS vs. iSCSI or Fibre Channel, Fibre Channel vs. Fibre Channel over Ethernet (FCoE) vs. iSCSI vs. InfiniBand, xWDM vs. SONET or MPLS, IP vs UDP or other IP based services, not to mention the whole LAN, SAN, MAN, WAN POTS and PAN speed games of 1G, 2G, 4G, 8G, 10G, 40G or 100G. Of course there are also the I/O virtualization (IOV) discussions including PCIe Single Root (SR) and Multi Root (MR) for attachment of SAS/SATA, Ethernet, Fibre Channel or other adapters vs. other approaches.

Thus when I routinely get asked about what is the best, my answer usually is a qualified it depends, based on what you are doing, what you are trying to accomplish, your environment, and your preferences among other things. In other words, I'm not hung up on or tied to any one particular networking transport, protocol, network or interface; rather, the ones that work and are most applicable to the task at hand.

Now getting back to Voltaire and InfiniBand, which I think has a future for some environments, however I don't see it being the be all, end all it was once promoted to be. And outside of the InfiniBand faithful (there are also iSCSI, SAS, Fibre Channel, FCoE, CEE and DCE among other devotees), I suspect that the results would be mixed.

I suspect that the Voltaire survey reflects that as well; if I surveyed an Ethernet dominant environment I could take a pretty good guess at the results, likewise for a Fibre Channel or FCoE influenced environment. Not to mention the composition of the environment, its focus and the business or applications being supported. One would also expect slightly different survey results from the likes of Aprius, Broadcom, Brocade, Cisco, Emulex, Mellanox (they are also involved with InfiniBand), NextIO, Qlogic (they actually do some InfiniBand activity as well), Virtensys or Xsigo (who actually support convergence of Fibre Channel and Ethernet via InfiniBand) among others.

Ok, so what is your take?

What's your preferred network interface for convergence?

For additional reading, here are some related links:

  • I/O Virtualization (IOV) Revisited
  • I/O, I/O, It's off to Virtual Work and VMworld I Go (or went)
  • Buzzword Bingo 1.0 – Are you ready for fall product announcements?
  • StorageIO in the News Update V2010.1
  • The Green and Virtual Data Center (Chapter 9)
  • Also check out what others have to say: Scott Lowe about IOV here, Stuart Miniman about FCoE here, or Greg Ferro here.
  • Oh, and for what it's worth for those concerned about FTC disclosure, Voltaire is not, nor have they been, a client of StorageIO; however, I did use to work for a Fibre Channel, iSCSI, IP storage, LAN, SAN, MAN, WAN vendor and wrote a book on the topics :).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Optimize Data Storage for Performance and Capacity Efficiency

    This post builds on a recent article I did that can be read here.

    Even with tough economic times, there is no such thing as a data recession! Thus the importance of optimizing data storage efficiency, addressing both performance and capacity without impacting availability, in a cost effective way to do more with what you have.

    What this means is that even though budgets are tight or have been cut resulting in reduced spending, overall net storage capacity is up year over year by double digits if not higher in some environments.

    Consequently, there is continued focus on stretching available IT and storage related resources or footprints further while eliminating barriers or constraints. IT footprint constraints can be physical in a cabinet or rack as well as floorspace, power or cooling thresholds and budget among others.

    Constraints can be due to lack of performance (bandwidth, IOPS or transactions), poor response time or lack of availability for some environments. Yet for other environments, constraints can be lack of capacity, limited primary or standby power or cooling constraints. Other constraints include budget, staffing or lack of infrastructure resource management (IRM) tools and time for routine tasks.

    Look before you leap
    Before jumping into an optimization effort, gain insight if you do not already have it as to where the bottlenecks exist, along with the cause and effect of moving or reconfiguring storage resources. For example, boosting capacity use to more fully use storage resources can result in a performance issue or data center bottlenecks for other environments.

    An alternative scenario is that, in the quest to boost performance, storage is seen as being under-utilized, yet when capacity use is increased, lo and behold, response time deteriorates. The result can be a vicious cycle, hence the need to address the issue as opposed to moving problems, by using tools to gain insight on resource usage, both space and activity or performance.

    Gaining insight means looking at capacity use along with performance and availability activity and how they use power, cooling and floor-space. Consequently an important step is to gain insight and knowledge of how your resources are being used to deliver various levels of service.

    Tools include storage or system resource management (SRM) tools that report on storage space capacity usage, performance and availability with some tools now adding energy usage metrics along with storage or system resource analysis (SRA) tools.

    Cooling Off
    Power and cooling are commonly talked about as constraints, either from a cost standpoint, or availability of primary or secondary (e.g. standby) energy and cooling capacity to support growth. Electricity is essential for powering IT equipment including storage enabling devices to do their specific tasks of storing data, moving data, processing data or a combination of these attributes.

    Thus, power gets consumed, some work or effort to move and store data takes place, and the byproduct is heat that needs to be removed. In a typical IT data center, cooling on average can account for about 50% of the energy used, with some sites using less.

    With cooling being a large consumer of electricity, a small percentage change to how cooling consumes energy can yield large results. Addressing cooling energy consumption can be about budget or cost issues, or about freeing up cooling capacity to support installation of extra storage or other IT equipment.

    Keep in mind that effective cooling relies on removing heat from as close to the source as possible to avoid over cooling, which requires more energy. If you have not done so, have a facilities review or assessment performed, which can range from a quick walk around to a more in-depth review and thermal airflow analysis. Means of removing heat close to the source include techniques such as intelligent, precision or smart cooling, also known by other marketing names.

    Powering Up, or, Powering Down
    Speaking of energy or power, in addition to addressing cooling, there are a couple of ways of addressing power consumption by storage equipment (Figure 1). The most commonly discussed approach towards efficiency is energy avoidance, which involves powering down storage when it is not used, such as first generation MAID, at the cost of performance.

    For off-line storage, tape and other removable media give low-cost capacity per watt with low to no energy needed when not in use. Second generation (e.g. MAID 2.0) solutions with intelligent power management (IPM) capabilities have become more prevalent enabling performance or energy savings on a more granular or selective basis often as a standard feature in common storage systems.

    Figure 1: Balancing energy avoidance and energy efficiency options

    Another approach to energy efficiency, seen in figure 1, is doing more work for active applications per watt of energy to boost productivity. This can be done by using the same amount of energy while doing more work, or doing the same amount of work with less energy.

    For example, instead of using larger capacity disks to improve capacity per watt metrics, active or performance sensitive storage should be looked at on an activity basis, such as IOPS, transactions, videos, emails or throughput per watt. Hence, a fast disk drive doing work can be more energy-efficient in terms of productivity than a higher capacity but slower disk drive for active workloads, while for idle or inactive data the inverse should hold true.

    On a go forward basis the trend already being seen with some servers and storage systems is to do both more work, while using less energy. Thus a larger gap between useful work (for active or non idle storage) and amount of energy consumed yields a better efficiency rating, or, take the inverse if that is your preference for smaller numbers.

    Reducing Data Footprint Impact
    Data footprint impact reduction tools or techniques for both on-line as well as off-line storage include archiving, data management, compression, deduplication, space-saving snapshots, thin provisioning along with different RAID levels among other approaches. From a storage access standpoint, you can also include bandwidth optimization, data replication optimization, protocol optimizers along with other network technologies including WAFS/WAAS/WADM to help improve efficiency of data movement or access.

    Thin provisioning for capacity centric environments can be used to achieve a higher effective storage use level by essentially over booking storage, similar to how airlines oversell seats on a flight. If you have good historical information and insight into how storage capacity is used and over allocated, thin provisioning enables improved effective storage use for some applications.

    However, with thin provisioning, avoid introducing performance bottlenecks by leveraging solutions that work closely with tools that provide historical trending information (capacity and performance).
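
    As a simple illustration of the overbooking idea, here is a small Python sketch with hypothetical numbers (not a sizing rule) showing the oversubscription ratio and consumed physical capacity that such tools track:

# Thin provisioning overbooking sketch using hypothetical numbers.

physical_tb = 100.0                       # installed usable capacity
allocated_tb = [40.0, 35.0, 50.0, 25.0]   # capacity promised to applications
written_tb = [12.0, 9.0, 20.0, 6.0]       # capacity actually consumed so far

oversubscription = sum(allocated_tb) / physical_tb
used_pct = 100.0 * sum(written_tb) / physical_tb
print(f"Oversubscription ratio: {oversubscription:.1f}:1")
print(f"Physical capacity consumed: {used_pct:.0f}%")

# Simple guardrail: warn while there is still time to add capacity or rebalance.
if used_pct > 75.0:
    print("Warning: approaching physical capacity, plan expansion or rebalancing")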

    For a technology that some have tried to declare dead in order to prop up other new or emerging solutions, RAID remains relevant given its widespread deployment and the transparent reliance upon it in organizations of all sizes. RAID also plays a role in storage performance, availability, capacity and energy constraints as well as being a relief tool.

    The trick is to align the applicable RAID configuration to the task at hand meeting specific performance, availability, capacity or energy along with economic requirements. For some environments a one size fits all approach may be used while others may configure storage using different RAID levels along with number of drives in RAID sets to meet specific requirements.


    Figure 2:  How various RAID levels and configuration impact or benefit footprint constraints

    Figure 2 shows a summary and the tradeoffs of various RAID levels. In addition to the RAID level, the number of disks can also have an impact on performance or capacity; for example, by creating a larger RAID 5 or RAID 6 group, the parity overhead can be spread out, however there is a tradeoff. Tradeoffs can be performance bottlenecks on writes or during drive rebuilds, along with potential exposure to drive failures.
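
    To show how spreading parity across a larger group changes the capacity overhead, here is a simplified, capacity-only Python sketch that deliberately ignores the rebuild time and failure exposure tradeoffs just mentioned:

# Usable capacity and parity overhead for common RAID levels (capacity only view).

def usable_tb(drives, drive_tb, level):
    if level == "RAID 10":
        return drives / 2 * drive_tb       # mirrored pairs
    if level == "RAID 5":
        return (drives - 1) * drive_tb     # one drive worth of parity per group
    if level == "RAID 6":
        return (drives - 2) * drive_tb     # two drives worth of parity per group
    raise ValueError(level)

for level in ("RAID 10", "RAID 5", "RAID 6"):
    for drives in (6, 12):
        cap = usable_tb(drives, 2.0, level)               # assume 2TB drives
        overhead = 100.0 * (1 - cap / (drives * 2.0))
        print(f"{level}, {drives} x 2TB: {cap:.0f}TB usable, {overhead:.0f}% overhead")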

    All of this comes back to a balancing act to align to your specific needs as some will go with a RAID 10 stripe and mirror to avoid risks, even going so far as to do triple mirroring along with replication. On the other hand, some will go with RAID 5 or RAID 6 to meet cost or availability requirements, or, some I have talked with even run RAID 0 for data and applications that need the raw speed, yet can be restored rapidly from some other medium.

    Let's bring it all together with an example
    Figure 3 shows a generic example of a before and after optimization for a mixed workload environment, granted you can increase or decrease the applicable capacity and performance to meet your specific needs. In figure 3, the storage configuration consists of one storage system setup for high performance (left) and another for high-capacity secondary (right), disk to disk backup and other near-line needs, again, you can scale the approach up or down to your specific need.

    For the performance side (left), 192 x 146GB 15K RPM (28TB raw) disks provide good performance, however with low capacity use. This translates into a low capacity per watt value, however with reasonable IOPS per watt and some performance hot spots.

    On the capacity centric side (right), there are 192 x 1TB disks (192TB raw) with good space utilization, however with some performance hot spots or bottlenecks, constrained growth, and low IOPS per watt with reasonable capacity per watt. In the before scenario, the joint energy use (both arrays) is about 15 kW (15,000 watts), which translates to about $16,000 in annual energy costs (cooling excluded) assuming an energy cost of 12 cents per kWh.
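
    Working the before numbers as a quick Python sketch (the 15 kW draw and 12 cents per kWh are from the example above, while the per drive IOPS figure is a hypothetical placeholder used only to illustrate the activity per watt metric):

# Annual energy cost and simple efficiency metrics for the before scenario (sketch).

power_kw = 15.0            # combined draw of both arrays per the example above
cost_per_kwh = 0.12        # stated energy cost, cooling excluded

annual_kwh = power_kw * 24 * 365
print(f"Annual energy: {annual_kwh:,.0f} kWh, about ${annual_kwh * cost_per_kwh:,.0f} per year")

# Activity and capacity per watt, per the metrics discussed earlier.
raw_tb = 192 * 0.146 + 192 * 1.0   # 146GB performance tier plus 1TB capacity tier
iops = 192 * 180                   # hypothetical ~180 IOPS per 15K RPM drive, for illustration only
watts = power_kw * 1000
print(f"Capacity per watt: {raw_tb * 1000 / watts:.1f} GB per watt")
print(f"IOPS per watt: {iops / watts:.2f}")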

    Note, your specific performance, availability, capacity and energy mileage will vary based on particular vendor solution, configuration along with your application characteristics.


    Figure 3: Baseline before and after storage optimization (raw hardware) example

    Building on the example in figure 3, a combination of techniques along with technologies yields a net performance, capacity and perhaps feature functionality (depending on the specific solution) increase. In addition, floor-space, power, cooling and associated footprints are also reduced. For example, the resulting solution shown (middle) comprises 4 x 250GB flash SSD devices, along with 32 x 450GB 15.5K RPM and 124 x 2TB 7200RPM drives, enabling a 53TB (raw) capacity increase along with a performance boost.

    The previous examples are based on raw or baseline capacity metrics, meaning that further optimization techniques should yield additional benefits. These examples should also help address the question or myth that it costs more to power storage than to buy it, to which the answer should be: it depends.

    If you can buy the above solution for, say, under $50,000 (its approximate three year cost to power), let alone under $100,000 (three years of power and cooling), which would also be a good acquisition, then the myth that powering storage costs more than buying it holds true. However, if a solution as described above costs more than that, the story changes, along with other variables including energy costs for your particular location, re-enforcing the notion that your mileage will vary.

    Another tip is that more is not always better.

    That is, more disks, ports, processors, controllers or cache do not always equate to better performance. Performance is the sum of how those and other pieces work together in a demonstrable way, ideally measured with your specific application workload rather than what is on a product data sheet.

    Additional general tips include:

    • Align the applicable tool, technique or technology to task at hand
    • Look to optimize for both performance and capacity, active and idle storage
    • Consolidated applications and servers need fast servers
    • Fast servers need fast I/O and storage devices to avoid bottlenecks
    • For active storage use an activity per watt metric such as IOP or transaction per watt
    • For in-active or idle storage, a capacity per watt per footprint metric would apply
    • Gain insight and control of how storage resources are used to meet service requirements

    It should go without saying, however sometimes what is understood needs to be restated.

    In the quest to become more efficient and optimized, avoid introducing performance, quality of service or availability issues by moving problems.

    Likewise, look beyond storage space capacity also considering performance as applicable to become efficient.

    Finally, it is all relative in that what might be applicable to one environment or application need may not apply to another.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    I/O Virtualization (IOV) Revisited

    Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

    Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

    Additional benefits of IOV include:

      • Doing more with the resources (people and technology) that already exist, or reducing costs
      • Single (or pair for high availability) interconnect for networking and storage I/O
      • Reduction of power, cooling, floor space, and other green efficiency benefits
      • Simplified cabling and reduced complexity for server network and storage interconnects
      • Boosting server performance while maximizing use of I/O or mezzanine slots
      • Reduction of I/O and data center bottlenecks
      • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
      • Scaling I/O capacity to meet high-performance and clustered application needs
      • Leveraging common cabling infrastructure and physical networking facilities

    Before going further, let's take a step backward for a few moments.

    To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of the role of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

    TIERED ACCESS FOR SERVERS AND STORAGE
    There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over Internet, cloud, XaaS, or managed services providers such as shown in figure 1.

    Figure 1: The Big Picture: Data Center I/O and Networking (source: The Green and Virtual Data Center, CRC, 2009)

    The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, together with time delays while waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory using internal local CPU to memory interconnects. In other words, fast servers or processors need fast I/O, both in terms of low latency I/O operations and bandwidth capabilities.

    Figure 2: Tiered I/O and Networking Access (source: The Green and Virtual Data Center, CRC, 2009)

    Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

    Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

    In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

    Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.

    Peripheral Component Interconnect (PCI)
    Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

    Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack mount systems.
    Figure 3: Dedicated PCI adapters for I/O and networking devices (source: The Green and Virtual Data Center, CRC, 2009)

    Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

    Figure 4: PCI IOV Single Root Configuration Example (source: The Green and Virtual Data Center, CRC, 2009)

    In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with PCIe version and signaling rate determine performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, and PCIe is capable of over 4 GB (e.g., 32 Gbit) in each direction using 16 lanes for high-end servers.

    The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

    The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

    Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCI) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is very short distance, but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

    I/O VIRTUALIZATION (IOV)
    On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft Hyper-V or VMware ESX/vSphere, the hypervisor will be able to see and share a single physical adapter, or multiple adapters, for redundancy and performance to guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

    Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine with a virtual server. With a virtual HBA or NIC, physical adapter resources are carved up and allocated in a manner similar to virtual machines, but instead of hosting a guest operating system like Windows, UNIX, or Linux, what is presented is a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC.

    In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security is met.

    PCI-SIG IOV
    PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single root IOV (SR IOV), and multi root IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single root IOV (SR IOV) enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

    Figure 5: PCI SIG IOV (source: The Green and Virtual Data Center, CRC, 2009)

    The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without having to incur any potential I/O overhead via virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

    Figure 6: PCI SIG MR IOV (source: The Green and Virtual Data Center, CRC, 2009)

    Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high performance short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

    In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. For a SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example, InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged networking adapters (CNA) also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely with transparency.

    The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

    InfiniBand IOV
    InfiniBand based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers that then see a Fibre Channel HBA and Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

    The value of InfiniBand converged networks is that they exist today and can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

    From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

    General takeaway points include the following:

    • Minimize the impact of I/O delays to applications, servers, storage, and networks
    • Do more with what you have, including improving utilization and performance
    • Consider latency, effective bandwidth, and availability in addition to cost
    • Apply the appropriate type and tiered I/O and networking to the task at hand
    • I/O operations and connectivity are being virtualized to simplify management
    • Convergence of networking transports and protocols continues to evolve
    • PCIe IOV is complementary to converged networking including FCoE

    Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However, until that time, or at least for the foreseeable future, several things can be done to minimize the impacts of I/O for local and remote networking as well as to simplify connectivity.

    PCIe Fundamentals Server Storage I/O Network Essentials

    Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Storage Efficiency and Optimization – The Other Green

    For those of you in the New York City area, I will be presenting live in person at the Storage Decisions conference on September 23, 2009: The Other Green, Storage Efficiency and Optimization.

    Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service (response time, performance, and availability), necessitating faster, more energy-efficient technologies to achieve optimization objectives.

    To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction, and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both online active or primary storage as well as near-line or secondary storage environments during tough economic times, and to position for future growth; after all, there is no such thing as a data recession!

    Topics, technologies and techniques that will be discussed include among others:

    • Energy efficiency (strategic) vs. energy avoidance (tactical), what's different between them
    • Optimization and the need for speed vs. the need for capacity, finding the right balance
    • Metrics & measurements for management insight, what the industry is doing (or not doing)
    • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
    • Data footprint reduction (archive, compress, dedupe) and thin provision among others
    • Best practices, financial incentives and what you can do today

    This is a free event for IT professionals; however, I hear space is limited, so learn more and register here.

    For those interested in broader IT data center and infrastructure optimization, check out the ongoing seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities, which continues September 22, 2009 with a stop in Chicago. This is also a free seminar; register and learn more here or here.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Storage Decisions Spring 2009 Sessions Update


    The conference lineup and details for the Spring 2009 Storage Decisions event (June 1st and 2nd) in Chicago are coming together, including two talks/presentations that I will be doing. One will be in Track 2 (Disaster Recovery), titled "Server Virtualization, Business Continuance and Disaster Recovery", and the other in Track 6 (Management/Executive), titled "The Other Green — Storage Efficiency and Optimization", with both sessions leveraging themes and topics from my new book "The Green and Virtual Data Center" (CRC).

    Track 2: Disaster Recovery
    Server Virtualization, Business Continuance and Disaster Recovery
    Presented by Greg Schulz, Founder and Senior Analyst, StorageIO
    Server virtualization has the potential to bring sophisticated business continuance (BC) and disaster recovery (DR) techniques to organizations that previously didn’t have the means to adopt them. Likewise, virtualized as well as cloud environments need to be included in a BC/DR plan to enable application and data availability. Learn tips and tricks on building an accessible BC/DR strategy and plan using server virtualization and the storage products that enable efficient, flexible green and virtual data centers.

    Topics include:
    * Cross technology domain data protection management
    * Tiered data protection to stretch your IT budget dollar
    * What’s needed to enable BC/DR for virtualized environments
    * How virtualization can enable BC/DR for non-virtualized environments
    * General HA, BC/DR and data protection tips for virtual environments

    Track 6: Management/Executive
    The Other Green — Storage Efficiency and Optimization
    Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service (response time, performance, and availability), necessitating faster, more energy-efficient technologies to achieve optimization objectives. To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction, and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both online active or primary storage as well as near-line or secondary storage environments during tough economic times, and to position for future growth; after all, there is no such thing as a data recession!

    Topics include:
    * Energy efficiency (strategic) vs. energy avoidance (tactical)
    * Optimization and the need for speed vs. the need for capacity
    * Metrics and measurements for management insight
    * Tiered storage and tiered access including SSD, FC, SAS and clouds
    * Data footprint reduction (archive, compress, dedupe) and thin provision
    * Best practices, financial incentives and what you can do today

    See you in Chicago in June if not before then. Learn more about other upcoming events and activities on the StorageIO events page.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    On The Road Again: An Update

    A while back, I posted about a busy upcoming spring schedule of activity and events, and then a few weeks ago posted an update, so this can be considered the latest "On The Road Again" update. While the economy continues to be in rough shape, with job reductions or layoffs continuing, hours being reduced, and employees being asked to take time off without pay or to take sabbaticals, not to mention the race to get the economic stimulus bill passed, for many people business and life go on.

    Airport parking lots have plenty of cars in them, and airplanes, while not always full, are not empty (granted, there has been some fleet optimization, aka aligning capacity to the best-suited tier of aircraft, along with other consolidation or capacity improvements). Many organizations are cutting back on travel and entertainment (T&E) spending, either to watch the top and bottom lines or to avoid being perceived or seen on the news as having employees going on junkets when they may in fact be going to conferences, seminars, conventions, or other educational and related events to boost skills and seek out ways to improve business productivity.

    One of the reasons that I have a busy travel schedule, in addition to my normal analyst and consulting activities, is that many events and seminars are being scheduled close to, or in, the cities where IT professionals are located who might otherwise have T&E restrictions or other constraints keeping them from traveling to industry events, some of which are or will be impacted by recent economic and business conditions.

    Last week I was invited to attend and speak at the FujiFilm Executive Seminar; no private jets were used or seen, and travel was via scheduled air carriers (coach airfare). FujiFilm has a nice program for those interested in or involved with tape, whether for disk-to-tape backup, disk-to-disk-to-tape, long-term archive, bulk storage, or other scenarios involving the continued use and changing roles of tape as a green data storage medium for inactive or off-line data. Check out the FujiFilm TapePower Center portal.

    This past week I was in the big "D", that is, Dallas, Texas, to do another TechTarget dinner event around the theme of BC/DR, virtualization, and IT optimization. The session was well attended by a diverse audience of IT professionals from around the DFW metroplex. Common themes included discussions about business and economic activity as well as the need to keep business and IT running even when budgets are being stretched further and further. Technology conversations included server and storage virtualization, tiered storage including SSD, fast FC and SAS disk drives, lower-performance high-capacity "fat" disk drives, and tape, not to mention tiered data protection, tiered servers, and other related items.

    The Green Gap continues to manifest itself: when asked, most people say they do not have Green IT initiatives; however, they do have power, cooling, floor-space, environmental (PCFE), or business economic sustainability concerns, aka the rest of the Green story.

    While some attendees have started to use new technologies including dedupe, most I find are still using a combination of disk and tape, with some considering dedupe in the future for certain applications. Other technologies and trends being watched, though with concerns as to their stability and viability for enterprise use, include flash-based SSD, cloud computing, and thin provisioning, among others. Common themes I hear from IT professionals are that these are technologies and tools to keep an eye on, or to use on a selective basis; they are essentially tiered resources to have in a toolbox of technologies to apply to different tasks to meet various service requirements. Hopefully the Cowboys can put a fraction of the energy and interest into improving their environment that the Dallas-area IT folks are applying to their environments, especially given their strained IT budgets versus the budget that the Cowboys have to work with for their player personnel.

    I always find it interesting when talking to groups of IT professionals, which tend to span enterprise, SME, and SMB, to hear what they are doing, looking at, or considering, which is often in stark contrast to some of the survey results on technology adoption trends one commonly reads or hears about. Hummm, nuff said, what say you?

    Hope to see you at one of the many upcoming events perhaps coming to a venue near you.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    DAS, SAS, FCoE, Green Efficient Storage and I/O Podcast & FAQs

    Storage I/O trends

    Here are some links to several recent podcasts and FAQs pertaining to various popular technologies and trends.

    Fibre Channel over Ethernet (FCoE) FAQs

    Direct Attached Storage (DAS) for SMB and other environments that do not need networked (SAN or NAS) storage.

    Green and Energy Efficient Storage as well as FCoE and related topics

    Along with several other topics found here.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Will 6Gb SAS kill Fibre Channel?

    Storage I/O trends

    With the advent of 6Gb SAS (Serial Attached SCSI), which doubles the speed of earlier 3Gb SAS along with other enhancements including longer cable distances of up to 10m, does this mean that Fibre Channel will be threatened? Well, I'm sure some conspiracy theorists or iSCSI die-hards might jump up and down and say yes, finally, even though some of the FCoE cheering section has already arranged a funeral or wake for FC, even while Converged Enhanced Ethernet-based Fibre Channel over Ethernet (FCoE) and its ecosystem are still evolving.

    Needless to say, SAS will be in your future. It may not be as a host server-to-storage-system interconnect; however, look for SAS high-performance drives to appear sometime in the not so distant future. While over time Fibre Channel based high-performance disk drives can be expected to give way to SAS-based disks, similar to how parallel SCSI or even IBM SSA drives gave way to FC disks, SAS as a server-to-storage-system interconnect will, at least for the foreseeable future, be more for smaller configurations: direct connect storage for blade centers, two-server clusters, and extremely cost-sensitive environments that do not need or cannot afford a more expensive iSCSI or NAS solution, let alone an FC or FCoE based solution.

    So while larger storage systems can over time be expected to support high-performance 3.5″ and 2.5″ SAS disks in place of FC disks, those systems will be accessed via FCoE, FC, iSCSI, or NAS, while mid-range and entry-level systems will, as they do today, see a mix of SAS, iSCSI, FC, and NAS, plus in the future some FCoE, not to mention some InfiniBand-based NAS or SRP for block access.
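
    To put the "doubles the speed" claim into rough numbers, here is a quick back-of-the-envelope sketch. It assumes the 8b/10b encoding used by 3Gb/6Gb SAS and 1/2/4/8Gb FC, and the figures are theoretical per-direction line rates; real-world throughput will be lower once protocol and framing overhead are factored in (8Gb FC, for example, is commonly quoted at roughly 800 MB/s).

```python
# Back-of-the-envelope comparison of nominal per-direction bandwidth.
# Assumes 8b/10b encoding (used by 3Gb/6Gb SAS and 1/2/4/8Gb FC), so usable
# data rate = line rate x 8/10. These are theoretical figures; real-world
# throughput is lower once protocol and framing overhead are counted.

ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 10 bits on the wire per 8 bits of data

links_gbaud = {
    "3Gb SAS (single lane)": 3.0,
    "6Gb SAS (single lane)": 6.0,
    "6Gb SAS (4-lane wide port)": 6.0 * 4,
    "4Gb FC": 4.25,   # 4GFC line rate in Gbaud
    "8Gb FC": 8.5,    # 8GFC line rate in Gbaud
}

for name, line_rate in links_gbaud.items():
    mb_per_sec = line_rate * 1000 * ENCODING_EFFICIENCY / 8  # Gbaud -> MB/s
    print(f"{name:<28} ~{mb_per_sec:,.0f} MB/s per direction")
```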

    From an I/O virtualization (IOV) standpoint, keep an eye on what's taking place with the PCI SIG around single-root IOV and multi-root IOV for server I/O.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Just When You Thought It Was Safe To Go In The Water Again!

    In the shark-infested waters where I/O and networking debates often rage, the Fibre Channel vs. iSCSI, or is that iSCSI vs. Fibre Channel, debates continue, which is about as surprising as an iceberg melting because it floated into warmer water or hot air in the tropics.

    Here's a link to an article at Processor.com by Kurt Marko "iSCSI vs. Fibre Channel: A Cost Comparison iSCSI Targets the Low-End SAN, But Are The Cost Advantages Worth The Performance Trade-offs?" that looks at a recent iSCSI justification report along with some additional commentary from me about apples-to-oranges comparisons.

    Here's the thing: no one in their right mind would try to refute that iSCSI at 1GbE, leveraging built-in server NICs, standard Ethernet switches, and operating system supplied path managers, is cheaper than, say, 4Gb Fibre Channel or even legacy 1Gb and 2Gb Fibre Channel. However, that's hardly an apples-to-apples comparison.

    A more interesting comparison is, for example, 10GbE iSCSI compared to 1GbE iSCSI (again, not a fair comparison), or the new solution from HP and QLogic: for about $8,200 USD, you get an 8Gb FC switch with a bunch of ports for expansion, four (4) PCIe 8Gb FC adapters, plus cables and transceiver optics. While not as cheap as 1GbE ports built into a server or an off-the-shelf Ethernet switch, that is a far cry from the usual apples-to-oranges comparison of no-cost Ethernet NICs vs. $1,500 FC adapters and high-priced FC director ports.

    To be fair, put this in comparison with 10GbE adapters (probably not a true apples-to-apples comparison either), which on CDW range from about $600 USD (without transceivers) to $1,100–$1,500 for a single port with transceivers, or about $2,500–$3,000 or more for dual or multi-port models.
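
    Just to make the apples-to-apples point concrete, here is a rough, hypothetical calculation using the price points mentioned above. The 10GbE switch per-port cost is my own placeholder assumption, not a figure from the article, and none of these numbers reflect current street pricing; treat it as a sketch of how to frame the comparison rather than a definitive answer.

```python
# Hypothetical cost-per-attached-server comparison using the rough price
# points discussed above. The 10GbE switch per-port figure is an assumed
# placeholder (not from the article); plug in real quotes for your environment.

servers = 4  # the HP/QLogic bundle includes four 8Gb FC adapters

# 8Gb FC starter bundle: switch with spare ports + 4 adapters + cables + optics
fc_bundle_usd = 8_200
fc_per_server = fc_bundle_usd / servers

# 10GbE iSCSI: single-port adapter with transceivers plus an assumed switch port
ten_gbe_adapter_usd = 1_100        # mid-range single port with transceivers
ten_gbe_switch_port_usd = 500      # assumption: varies widely by switch class
iscsi_per_server = ten_gbe_adapter_usd + ten_gbe_switch_port_usd

print(f"8Gb FC bundle: ~${fc_per_server:,.0f} per attached server")
print(f"10GbE iSCSI:   ~${iscsi_per_server:,.0f} per attached server (assumed switch port cost)")
```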

    So the usual counter-argument to trying to make a more apples-to-apples comparison is that iSCSI deployments do not need the performance of 10GbE or 8Gb Fibre Channel, which is very valid; however, then the comparison should be iSCSI vs. NAS.

    Here's the bottom line: I like iSCSI for its target markets and see lots of upside and growth opportunity, just as I see a continued place for Fibre Channel and, moving forward, FCoE leveraging Ethernet as the common denominator (at least for now), as well as NAS for data sharing and SAS for small deployments requiring shared storage (assuming a shared SAS array, that is).

    I'm a fan of using the right technology or tool for the task at hand, and if that gets me in trouble with the iSCSI purists who want everything on iSCSI, well, too bad, so be it. Likewise, if the FC police are not happy that I'm not ready and willing to squash out the evil iSCSI, well, too bad, get over it; same with NAS, InfiniBand, and SAS. That's not to say I don't take a side or have a preference; rather, applied to the right task at hand, I'm a huge fan of these and other technologies, hence the discussion about apples-to-apples comparisons and applicability.

    Cheers
    GS