DIY converged server software defined storage on a budget using Lenovo TS140

Attention DIY Converged Server Storage Bargain Shoppers


server storage I/O trends

Recently I put together a two-part series of server storage I/O gift ideas for the geek in your life (read part I here and part II here), which also includes items that can be used for accessorizing servers such as the Lenovo ThinkServer TS140.

Image via Lenovo.com

Likewise I have done reviews of the Lenovo ThinkServer TS140 in the past which included me liking them and buying some (read the reviews here and here), along with a review of the larger TD340 here.

Why is this of interest

Do you need or want to do a Do It Yourself (DIY) build of a small server compute cluster, or a software defined storage cluster (e.g. scale-out), or perhaps a converged storage for VMware VSAN, Microsoft SOFS or something else?

Do you need a new server, a second or third server, to expand a cluster, or to create a lab or similar, and want the ability to tailor your system without shopping for a motherboard, enclosure, power supply and so forth?

Are you a virtualization or software defined person looking to create a small VMware Virtual SAN (VSAN) needing three or more servers to build a proof of concept or personal lab system?

Then the TS140 could be a fit for you.

storage I/O Lenovo TS140
Image via StorageIOlabs, click to see review

Why the Lenovo TS140 now?

Recently I have seen a lot of traffic on my site from people viewing my reviews of the Lenovo TS140, of which I have a few. In addition, I have received questions from people via the comments section as well as elsewhere about the TS140, and while shopping at Amazon.com for some other things, I noticed that there were some good value deals on different TS140 models.

I tend to buy the TS140 models that are bare bones, having an enclosure, CD/DVD, USB ports, power supply and fan, processor and a minimal amount of DRAM memory. For processors, mine have the Intel E3-1225 v3, which is quad-core and has various virtualization assist features (e.g. good for VMware and other hypervisors).

What I saw on Amazon the other day (also elsewhere) were some Intel i3-4130 dual-core based systems (these do not have all the virtualization features, just the basics) in a bare configuration (e.g. no Hard Disk Drive (HDD); 4GB DRAM, processor, motherboard, power supply and fan, LAN port and USB) with a price of around $220 USD (your price may vary depending on timing, venue, Prime or other membership and other factors). Not bad for a system that you can tailor to your needs. However, what also caught my eye were the TS140 models that have the Intel E3-1225 v3 (e.g. quad-core, 3.2GHz) processor matching the others I have, with a price of around $330 USD including shipping (your price will vary depending on venue and other factors).

What are some things to be aware of?

Some caveats of this solution approach include:

  • There are probably other similar types of servers, either by price, performance, or similar
  • Compare apples to apples, e.g. same or better processor, memory, OS, PCIe speed and type of slots, LAN ports
  • Not as robust a solution as those you can find costing tens of thousands of dollars (or more)
  • A DIY system which means you select the other hardware pieces and handle the service and support of them
  • Hardware platform approach where you choose and supply your software of choice
  • For entry-level environments that have floor-space or rack-space to accommodate towers vs. rack-mount or other alternatives
  • Software agnostic: basically an empty server chassis (with power supply, motherboard, PCIe slots and other things)
  • Possible candidate for smaller SMB (Small Medium Business), ROBO (Remote Office Branch Office), SOHO (Small Office Home Office) or labs that are looking for DIY
  • A starting place and stimulus for thinking about doing different things

What could you do with this building block (e.g. server)

Create a single or multi-server based system for

  • Virtual Server Infrastructure (VSI) including KVM, Microsoft Hyper-V, VMware ESXi, Xen among others
  • Object storage
  • Software Defined Storage including DataCore, Microsoft SOFS, OpenStack, StarWind, VMware VSAN, various XFS and ZFS based options among others
  • Private or hybrid cloud including using OpenStack among other software tools
  • Create a Hadoop big data analytics cluster or grid
  • Establish a video or media server, use for gaming or a backup (data protection) server
  • Update or expand your lab and test environment
  • General purpose SMB, ROBO or SOHO single or clustered server

VMware VSAN server storageIO example

What you need to know

Like some other servers in this class, you need to pay attention to what it is that you are ordering: check out the various reviews, comments and questions, and verify the make, model and configuration. For example, what is and is not included, the warranty and the return policy, among other things. Some of the TS140 models do not include an HDD, OS, keyboard, monitor or mouse, and they come with different types of processors and memory. Not all the processors are the same, so pay attention; visit the Intel Ark site to look up a specific processor's configuration to see if it fits your needs, and visit the hardware compatibility list (HCL) for the software that you are planning to use. Note that these should be best practices regardless of make, model, type or vendor for server, storage, and I/O networking hardware and software.

What you will need

This list assumes that you have obtained a model without an HDD, keyboard, video, mouse or operating system (OS) installed:

  • Update your BIOS if applicable, check the Lenovo site
  • Enable virtualization and other advanced features via your BIOS
  • Software such as an Operating System (OS), hypervisor or other distribution (load via USB or CD/DVD if present)
  • SSD, SSHD/HHDD, HDD or USB flash drive for installing OS or other software
  • Keyboard, video, mouse (or a KVM switch)

What you might want to add (have it your way)

  • Keyboard, video mouse or a KVM switch (See gifts for a geek here)
  • Additional memory
  • Graphics card, GPU or PCIe riser
  • Additional SSD, SSHD/HHDD or HDD for storage
  • Extra storage I/O and networking ports

Extra networking ports

You can easily add some GbE (or faster) ports, including using the PCIe x1 slot, or use one of the other slots for a quad-port GbE (or faster) card, not to mention some single or dual-port InfiniBand cards such as the Mellanox ConnectX-2 or ConnectX-3 that support QDR and can run in IBA or 10GbE modes. If you only have two or three servers in a cluster, grid or ring configuration, you can run point-to-point topologies using InfiniBand (and some other network interfaces) without using a switch; however, you decide if you need or want switched or non-switched (I have a switch). Note that with VMware (and perhaps other hypervisors or OS) you may need to update the drivers for the Realtek GbE LAN on Motherboard port (see links below).
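As a rough sanity check on the switchless option, a full mesh of N servers needs N*(N-1)/2 point-to-point links, with a port on each end of every link; a quick sketch (illustrative only, not tied to any particular NIC or fabric):

```python
def mesh_links(n: int) -> int:
    # Point-to-point links needed for a full mesh of n nodes;
    # each link consumes one port on both of the two nodes it connects.
    return n * (n - 1) // 2

for n in (2, 3, 4, 5):
    links = mesh_links(n)
    print(f"{n} servers: {links} links, {2 * links} total ports")
```

With dual-port cards like the ones mentioned above, two or three servers fit within two ports per box (a triangle of three links); at four servers you already need three ports per box, which is why a switch quickly becomes the simpler option.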

Extra storage ports

For extra storage space capacity (and performance) you can easily add PCIe Gen 2 or Gen 3 HBAs (SAS, SATA, FC, FCoE, CNA, UTA, IBA for SRP, etc.) or RAID cards among others. Depending on your choice of cards, you can then attach to more internal storage, external storage or some combination with different adapters, cables, interposers and connectivity options. For example, I have used TS140s with PCIe Gen 3 12Gbps SAS HBAs attached to 12Gbps SAS SSDs (and HDDs) with the ability to drive enough performance to see what those devices are capable of doing.
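For a rough ceiling on what such a setup can move: 12Gbps SAS uses 8b/10b encoding, so each lane tops out around 1,200 MB/s before protocol overhead. A back-of-the-envelope sketch (an upper bound under the assumptions noted in the comments, not a measured result):

```python
def sas_lane_mb_s(line_rate_gbps: float) -> float:
    # Usable MB/s per SAS lane, assuming 8b/10b encoding
    # (8 data bits per 10 line bits) and ignoring SAS protocol
    # overhead, so treat this as an upper bound.
    return line_rate_gbps * 1e9 * (8 / 10) / 8 / 1e6

lane = sas_lane_mb_s(12)
print(f"12Gbps SAS: ~{lane:.0f} MB/s per lane, ~{4 * lane:.0f} MB/s for a x4 wide port")
```

In other words, a single x4 wide-port HBA has roughly enough headroom to let a handful of fast SAS SSDs show what they can do.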

TS140 Hardware Defined My Way

As an example of how a TS140 can be configured, take one of the base E3-1225 v3 models with 4GB RAM and no HDD (e.g. around $330 USD, your price will vary), add a 4TB Seagate HDD (or two or three) for around $140 USD each (your price will vary), and add a 480GB SATA SSD for around $340 USD (your price will vary), with those attached to the internal SATA ports. To bump up network performance, how about a Mellanox ConnectX-2 dual-port QDR IBA/10GbE card for around $140 USD (your price will vary), plus around $65 USD for a QSFP cable (your price will vary), and some extra memory (use what you have or shop around), and you have a platform ready to go for around $1,000 USD. Add some more internal or external disks, bump up the memory, put in some extra network adapters and your price will go up a bit; however, think about what you can have for a robust, not so little system. For you VMware vGeeks, think about the proof of concept VSAN that you can put together, granted you will have to do some DIY items.
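Adding up the example parts above (using the approximate prices quoted in this post; your prices will vary):

```python
# Approximate prices (USD) as quoted in this post; your price will vary
build = {
    "TS140 base (quad-core, 4GB RAM, no HDD)": 330,
    "4TB Seagate HDD": 140,
    "480GB SATA SSD": 340,
    "Mellanox ConnectX-2 dual-port QDR card": 140,
    "QSFP cable": 65,
}
for part, usd in build.items():
    print(f"{part:42s} ${usd:>4}")
print(f"{'Total':42s} ${sum(build.values()):>4}")
```

That lands right around the $1,000 mark before any extra memory or additional drives.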

Some TS140 resources

Lenovo TS140 resources include

  • TS140 StorageIOlab review (here and here)
  • TS140 Lenovo ordering website
  • TS140 Data and Spec Sheet (PDF here)
  • Lenovo ThinkServer TS140 Manual (PDF here) and (PDF here)
  • Intel E3-1200 v3 processors capabilities (Web page here)
  • Enabling Virtualization Technology (VT) in TS140 BIOS (Press F1) (Read here)
  • Enabling Intel NIC (82579LM) GbE with VMware (Link to user forum and a blog site here)

Image via Lenovo.com

What this all means

Like many servers in its category (price, capabilities, packaging) you can do a lot of different things with them, as well as hardware define them with accessories, or use your own software. Depending on how you end up hardware defining the TS140 with extra memory, HDDs, SSDs, adapters or other accessories and software, your cost will vary. However, you can also put together a pretty robust system without breaking your budget while meeting different needs.

Is this for everybody? Nope

Is this for more than a lab, experimental, hobbyist, gamer? Sure, with some caveats.

Is this an apples to apples comparison vs. some other solutions including VSANs? Nope, not even close, maybe apples to oranges.

Do I like the TS140? Yup, starting with a review I did about a year ago, I liked it so much I bought one, then another, then some more.

Are these the only servers I have, use or like? Nope, I also have systems from HP and Dell as well as test drive and review others.

Why do I like the TS140? It's a value for some things, which means that while affordable (not to be confused with cheap) it has features, scalability and the ability to be hardware defined for what I want or need to use it as, along with software defined to be different things. Key for me is the PCIe Gen 3 support with multiple slots (and types of slots), a reasonable amount of memory, internal housing for 3.5" and 2.5" drives that can attach to on-board SATA ports, and a media device (CD/DVD) if needed, or remove it to make room for more HDDs and SSDs. In other words, it's a platform where instead of shopping for a motherboard, an enclosure, power supply, processor and related things, I get the basics, then configure and reconfigure as needed.

Another reason I like the TS140 is that I get to have the server basically my way, in that I do not have to order it with a minimum number of HDDs, or with an OS, more memory than needed or other things that I may or may not be able to use. Granted, I need to supply the extra memory, HDDs, SSDs, PCIe adapters and network ports along with software; however, for me that's not too much of an issue.

What don’t I like about the TS140? You can read more about my thoughts on the TS140 in my review here, or its bigger sibling the TD340 here, however I would like to see more memory slots for scaling up. Granted for what these cost, it’s just as easy to scale-out and after all, that’s what a lot of software defined storage prefers these days (e.g. scale-out).

The TS140 is a good platform for many things, granted not for everything; that's why, like storage, networking and other technologies, there are different server options for various needs. Exercise caution when doing apples to oranges comparisons on price alone; compare what you are getting in terms of processor type (and its functionality), expandable memory, PCIe speed, type and number of slots, LAN connectivity and other features to meet your needs or requirements. Also keep in mind that some systems might be more expensive yet include a keyboard, or an HDD with an OS installed; if you can use those components, then they have value and should be factored into your cost, benefit and return on investment.

And yes, I just added a few more TS140s that join other recent additions to the server storageIO lab resources…

Anybody want to guess what I will be playing with, among other things, during the upcoming holiday season?

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Where has the FCoE hype and FUD gone? (with poll)

Storage I/O cloud virtual and big data perspectives

A couple of years ago I did this post asking Is FCoE Struggling to Gain Traction, or on a normal adoption course?

Fast forward to today, has anybody else noticed that there seems to be less hype and fud on Fibre Channel (FC) over Ethernet (FCoE) than a year or two or three ago?

Does this mean that FCoE as the fud or detractors were predicting is in fact stillborn with no adoption, no deployment and dead on arrival?

Does this mean that FCoE as its proponents have said is still maturing, quietly finding adoption and deployment where it fits?

Does this mean that FCoE like its predecessors Fibre Channel and Ethernet are still evolving, expanding from early adopter to a mature technology?

Does this mean that FCoE is simply forgotten with software defined networking (SDN) having over-shadowed it?

Does this mean that FCoE has finally lost out and that iSCSI has finally stepped up and living up to what it was hyped to do ten years ago?

Does this mean that FC itself at either 8GFC or 16GFC is holding its own for now?

Does this mean that InfiniBand is on the rebound?

Does this mean that FCoE is simply not fun or interesting, or a shiny new technology with vendors not spending marketing money so thus people not talking, tweeting or blogging?

Does this mean that those who were either proponents pitching it or detractors despising it have found other things to talk about, from SDN to OpenFlow to IOV to Software Defined Storage (whatever, or whoever's, definition you subscribe to) to cloud, big or little data, and the list goes on?

I continue to hear of or talk with customer organizations deploying FCoE in addition to iSCSI, FC, NAS and other means of accessing storage for cloud, virtual and physical environments.

Likewise I see some vendor discussions occurring not to mention what gets picked up via google alerts.

However, in general, the rhetoric both for and against, hype and FUD, seems to have subsided, at least for now.

So what gives, what’s your take on FCoE hype and FUD?

Cast your vote and see results here.

 

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go

StorageIO industry trends and perspectives

In case you missed it, VMware recently announced spending $1.05 billion USD to acquire startup Nicira for their virtualization and software technology that enables software defined networks (SDN). Also last week, Oracle was in the news getting its hands slapped for making misleading advertisement performance claims vs. IBM.

On the heels of VMware buying Nicira for software defined networking (SDN), or what is also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with its announcement of intent to acquire Xsigo. Founded in 2004, Xsigo has a hardware platform combined with software that enables attachment of servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with their version of IOV.

Oracle is acquiring the IO, networking and virtualization hardware and software vendor for an undisclosed amount. Xsigo has made its name in the IO virtualization (IOV) and converged networking space, along with server and storage virtualization, over the past several years, including through partnerships with various vendors.

Buzz word bingo

Technology buzzwords and buzz terms can often be a gray area leaving plenty of room for marketers and PR folks to run with. Case in point AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength which is IO virtualization (aka IOV).

To me and others (here and here and here) it is interesting that Xsigo has not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as happened with Nicira (and Oracle for that matter). In the press release that the Oracle marketing and PR folks put out on a Monday morning, some of the media and press, trade industry, financial and general news agencies alike, took the Oracle script hook, line and sinker, running with it.

What was effective is how well many industry trade pubs and their analysts simply picked up the press release story and ran with it in the all too common race to see who can get the news or story out first, or before it actually happens in some cases.

Image of media, news papers

To be clear, not all pubs jumped, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at SpeakinginTech highlights. I know some who took the time to call, ask around, and leverage their journalistic training to dig, research and find out what this really meant vs. simply taking and running with the script. An example of one of those calls was with Beth Pariseau (aka @pariseautt); you can read her story here and here.

Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the more familiar (at least in some circles) virtual I/O and IOV descriptions. What is also interesting is that just last week, Oracle marketing had its hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance-based advertisements for ExaData.

Oracle Exadata

Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

Both Oracle and Xsigo along with other InfiniBand (and some Ethernet and PCIe) focused vendors are members of the Open Fabric initiative, not to be confused with the group working on OpenFlow.

StorageIO industry trends and perspectives

Here are some other things to think about:

Oracle has a history of doing different acquisitions without disclosing terms, as well as doing them based on earn outs such as was the case with Pillar.

Oracle uses Ethernet in its servers and appliances, and has been an adopter of InfiniBand primarily for node-to-node communication, however also for server-to-application connectivity.

Oracle is also an investor in Mellanox the folks that make InfiniBand and Ethernet products.

Oracle has built various stacks including ExaData (Database machine), Exalogic, Exalytics and Database Appliance in addition to their 7000 series of storage systems.

Oracle has done earlier virtualization related acquisitions including Virtual Iron.

Oracle has a reputation with some of their customers who love to hate them for various reasons.

Oracle has a reputation of being aggressive, even by other market leader aggressive standards.

Integrated solution stacks (aka stack wars) or what some remember as bundles continues and Oracle has many solutions.

What will happen to Xsigo as you know it today (besides what the press releases are saying)?

While Xsigo was not a member of the Open Networking Forum (ONF), Oracle is.

Xsigo is a member of the Open Fabric Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.

StorageIO industry trends and perspectives

What’s my take?

While there are similarities in that both Nicira and Xsigo are involved with IO Virtualization, what they are doing, how they are doing it, who they are doing it with along with where they can play vary.

Not sure what Oracle paid; however, assuming that it was in the couple of million dollars or less range, in cash or a combination of stock, both they and the investors, as well as some of the employees, friends and families, did ok.

Oracle also gets some intellectual property that it can combine with other earlier acquisitions via Sun and Virtual Iron, along with its investment in InfiniBand (also now Ethernet) vendor Mellanox.

Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

For Xsigo customers, the good news is that you now know who will be buying the company; however, there should be questions about the future beyond what is being said in press releases.

Does this acquisition give Oracle a play in the software defined networking space like Nicira gives to VMware? I would say no, given their hardware dependency; however, it does give Oracle some extra technology to play with.

Likewise while important and a popular buzzword topic (e.g. SDN), since OpenFlow comes up in conversations, perhaps that should be more of the focus vs. if a solution is all software or hardware and software.

StorageIO industry trends and perspectives

I also find it entertaining how last week the Better Business Bureau (BBB) and NAD (National Advertising Division) slapped Oracle's hands after IBM complained of misleading performance claims about Oracle ExaData vs. IBM. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story and fed it to the press and media, who then ran with it.

I'm not convinced that this is an all-out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (makes for good headlines though). Instead I'm seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle-of-summer price. Now if Oracle really wanted to go to battle with Cisco (and others), then there are others to buy, such as Brocade or Juniper. However, there are other opportunities for Oracle to be focused on (or sidetracked by) right now.

Oh, and let's also see what Cisco has to say about all of this, which should be interesting.

Additional related links:
Data Center I/O Bottlenecks Performance Issues and Impacts
I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
I/O Virtualization (IOV) Revisited
Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
The function of XaaS(X) Pick a letter
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?

StorageIO industry trends and perspectives

If you are interested in learning more about IOV, Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that's enough links for now ;).

Ok, nuff said for now, as I have probably requalified for being on the Oracle you-know-what list for not sticking to the story script, oops, excuse me, I mean the press release message.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Congratulations to IBM for releasing XIV SPC results

Over the past several years I have done an annual post about IBM and their XIV storage system and this is the fourth in what has become a series. You can read the first one here, the second one here, and last years here and here after the announcement of the IBM V7000.

IBM XIV Gen3
IBM recently announced the generation 3 (Gen3) version of XIV, along with releasing for the first time public performance comparison benchmarks using the Storage Performance Council (SPC) SPC2 throughput workload.

The XIV Gen3 is positioned by IBM as having up to four (4) times the performance of earlier generations of the storage system. In terms of speeds and feeds, the Gen3 XIV supports up to 180 2TB SAS hard disk drives (HDDs), providing up to 161TB of usable storage space capacity. For connectivity, the Gen3 XIV supports up to 24 8Gb Fibre Channel (8GFC) ports, or for iSCSI, 22 1Gb Ethernet (1GbE) ports, with a total of up to 360GBytes of system cache. In addition to the large cache to boost performance, other enhancements include leveraging multi-core processors along with an internal InfiniBand network to connect nodes, replacing the former 1GbE interconnect. Note that InfiniBand is only used to interconnect the various nodes in the XIV cluster and is not used for attachment to application servers, which is handled via iSCSI and Fibre Channel.
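The usable capacity math works out roughly as follows. Note that IBM publishes only the 161TB usable figure; attributing the remaining mirrored capacity to spares and metadata is my assumption for illustration:

```python
drives, tb_per_drive = 180, 2
raw_tb = drives * tb_per_drive      # 360 TB raw capacity
mirrored_tb = raw_tb // 2           # 180 TB after two-way mirroring
usable_tb = 161                     # published usable capacity
overhead_tb = mirrored_tb - usable_tb  # ~19 TB, presumably spares/metadata (assumption)
print(f"raw {raw_tb} TB -> mirrored {mirrored_tb} TB -> usable {usable_tb} TB "
      f"(~{overhead_tb / mirrored_tb:.0%} of mirrored space as overhead)")
```

So a little over half the raw capacity goes to mirroring, with roughly another tenth of what remains reserved by the system.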

IBM and SPC storage performance history
IBM has a strong history of, if not an industry lead in, benchmarking and workload simulation of its storage systems, including Storage Performance Council (SPC) results among others. The exception for IBM over the past couple of years has been the lack of SPC benchmarks for XIV. Last year, when IBM released its new V7000 storage system, benchmarks including SPC were available close to if not at the product launch. I have in the past commented about IBM's lack of SPC benchmarks for XIV to confirm their marketing claims, given their history of publishing results for all of their other storage systems. Now that IBM has released SPC2 results for the XIV, it is only fitting that I compliment them for doing so.

Benchmark brouhaha
Performance workload simulation results can often lead to apples and oranges comparisons, benchmark brouhaha battles or storage performance games. For example, a few years back NetApp submitted an SPC performance result on behalf of their competitor EMC. Now to be clear on something, I'm not saying that SPC is the best or definitive benchmark or comparison tool for storage or other purposes, as it is not. However, it is representative, and most storage vendors have released some SPC results for their storage systems, in addition to TPC and Microsoft ESRP among others. SPC2 is focused on streaming such as video, backup or other throughput-centric applications, while SPC1 is centered around IOPS or transactional activity. The metrics for SPC2 are Megabytes per second (MBps) for large file processing (LFP), large database query (LDQ) and video on demand delivery (VOD) for a given price and protection level.

What is the best benchmark?
Simple: your own application, with as close to actual workload activity as possible. If that is not possible, then a workload simulation that most closely resembles your needs.

Does this mean that XIV is still relevant?
Yes

Does this mean that XIV G3 should be used for every environment?
Generally speaking no. However its performance enhancements should allow it to be considered for more applications than in the past. Plus with the public comparisons now available, that should help to silence questions (including those from me) about what the systems can really do vs. marketing claims.

How does XIV compare to some other IBM storage systems using SPC2 comparisons?

System    SPC2 MBps   Cost per SPC2 MBps   Storage GBytes   Price tested   Discount   Protection
DS5300    5,634.17    $74.13               16,383           $417,648       0%         R5
V7000     3,132.87    $71.32               29,914           $223,422       38-39%     R5
XIV G3    7,467.99    $152.34              154,619          $1,137,641     63-64%     Mirror
DS8800    9,705.74    $270.38              71,537           $2,624,257     40-50%     R5

In the above comparisons, the DS5300 (NetApp/Engenio based) is a dual-controller system (4GB of cache per controller) with 128 x 146.8GB 15K HDDs configured as RAID 5, with no discount applied to the price submitted. The V7000 system, which is based on the IBM SVC along with other enhancements, consists of dual controllers each with 8GB of cache and 120 x 10K 300GB HDDs configured as RAID 5, with just under a 40% discount off list price for the system tested. For the XIV Gen3 system tested, the discount off list price for the submission is about 63%, with 15 nodes, a total of 360GB of cache and 180 2TB 7.2K SAS HDDs configured as mirrors. The DS8800 system with dual controllers has 256GB of cache and 768 x 146GB 15K HDDs configured in RAID 5, with a discount between 40 and 50% off list.
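The cost per SPC2 MBps column is simply the price as tested divided by the SPC2 MBps result; recomputing it from the table above:

```python
# (system, SPC2 MBps, price as tested in USD) taken from the comparison table
results = [
    ("DS5300", 5634.17, 417_648),
    ("V7000", 3132.87, 223_422),
    ("XIV G3", 7467.99, 1_137_641),
    ("DS8800", 9705.74, 2_624_257),
]
for name, mbps, price in results:
    print(f"{name:7s} ${price / mbps:7.2f} per SPC2 MBps")
```

This reproduces the dollars-per-MBps figures in the table, and makes it easy to plug in your own negotiated price to see how a quote compares.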

What the various metrics do not show is the benefit of various features and functionality, which should be considered relative to your particular needs. Likewise, if your applications are not centered around bandwidth or throughput, then the above performance comparisons would not be relevant. Also note that the systems above have various discounted prices as submitted, which can be a hint to a smart shopper as to where to begin negotiations. You can also do some analysis of the various systems based on their performance, configuration, physical footprint, functionality and cost; plus, the links below take you to the complete reports with more information.

DS8800 SPC2 executive summary and full disclosure report

XIV SPC2 executive summary and full disclosure report

DS5300 SPC2 executive summary and full disclosure report

V7000 SPC2 executive summary and full disclosure report

Bottom line: benchmarks and performance comparisons are just that, comparisons that may or may not be relevant to your particular needs. Consequently they should be used as a tool, combined with other information, to see how a particular solution might fit your specific needs. The best benchmark, however, is your own application running as close as possible to a realistic workload to get a representative perspective of a system's capabilities.

Ok, nuff said
Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Cloud and Virtual Data Storage Networking book released

Ok, it’s now official: following its debut at the VMworld 2011 book store last week in Las Vegas, my new book Cloud and Virtual Data Storage Networking (CRC Press) is now formally released, with general availability announced today along with companion material located at https://storageioblog.com/book3, including the Cloud and Virtual Data Storage Networking LinkedIn group page launched a few months ago. Cloud and Virtual Data Storage Networking (CVDSN), a 370 page hard cover print, is my third solo book, following The Green and Virtual Data Center (CRC Press 2009) and Resilient Storage Networks (Elsevier 2004).

Cloud and Virtual Data Storage Networking Book by Greg Schulz
The CVDSN book was on display at the VMworld 2011 book store last week along with a new book by Duncan Epping (aka @DuncanYB) and Frank Denneman (aka @frankdenneman) titled VMware vSphere 5 Clustering Technical Deepdive. You can get your copy of Duncan and Frank's new book on Amazon here.

Greg Schulz during book signing at VMworld 2011
Here is a photo of me (on the left) visiting with a VMworld 2011 attendee in the VMworld book store.

 

What's inside the book: theme and topics covered

When it comes to clouds, virtualization, converged and dynamic infrastructures: Don't be scared; however, do look before you leap and be prepared, including doing your homework.

What this means is that you should do your homework, prepare, learn, and get involved with proof of concepts (POCs) and training to build the momentum and success to continue an ongoing IT journey. Identify where clouds, virtualization and data storage networking technologies and techniques complement and enable your journey to efficient, effective and productive optimized IT services delivery.

 

There is no such thing as a data or information recession: Do more with what you have

A common challenge in many organizations is exploding data growth along with associated management tasks and constraints, including budgets, staffing, time, physical facilities, floor space, and power and cooling. IT clouds and dynamic infrastructure environments enable flexible, efficient, optimized, cost-effective and productive services delivery. The amount of data being generated, processed, and stored continues to grow, a trend that does not appear to be changing in the future. Even during the recent economic crisis, there has been no slowdown or information recession. Instead, the need to process, move, and store data has only increased; in fact, both people and data are living longer. CVDSN presents options, technologies, best practices and strategies for IT organizations looking to do more with what they have while supporting growth and new services without compromising on cost or QoS delivery (see figure below).

Driving Return on Innovation, the new ROI: Doing more, reducing costs while boosting productivity

 

Expanding focus from efficiency and optimization to effectiveness and productivity

A primary tenet of a cloud and virtualized environment is to support growing demand in a cost-effective manner with increased agility without compromising QoS. By removing complexity and enabling agility, information services can be delivered in a timely manner to meet changing business needs.

 

There are many types of information services delivery model options

Various types of information services delivery models should be combined to meet different needs and requirements. These complementary service delivery options and descriptive terms include cloud, virtual and data storage network enabled environments: dynamic infrastructure, public, private and hybrid cloud, abstracted, multi-tenant, capacity on demand, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), among others.

Convergence: combining different technology domains and skill sets

Components of a cloud and virtual environment include desktop, servers, storage, networking, hardware, software and services, along with APIs and software stacks. These include virtual and physical desktops; data, voice and storage networks; LANs, SANs, MANs and WANs; faster blade and rack servers with more memory; SSD and high-capacity storage; and associated virtualization tools and management software. True convergence combines technology with people, processes and best practices, aligned to make the most of those resources for cost-effective services delivery.

 

Best people, processes, practices and products (the four Ps)

Bringing all the various components together are the four Ps: people skill sets, processes, practices and products. This means leveraging and enhancing people skill sets and experience; processes and procedures to optimize workflow for streamlined service orchestration; practices and policies to more effectively reduce waste without causing new bottlenecks; and products such as racks, stacks, hardware, software, and managed or cloud services.

 

Service categories and catalogs, templates SLO and SLA alignment

Establishing service categories aligned to known service levels and costs enables resources to be aligned to applicable SLO and SLA requirements. Leveraging service templates and defined policies can enable automation and rapid provisioning of resources including self-service requests.
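As a sketch of what a simple service catalog aligned to SLOs might look like, here is a hypothetical example in Python. The tier names, availability targets, latency budgets and unit costs are all illustrative assumptions, not a real catalog:

```python
# Hypothetical service catalog: categories mapped to SLO targets and
# unit costs. Names and numbers are illustrative only.
service_catalog = {
    "gold":   {"availability": 0.9999, "max_latency_ms": 5,   "cost_per_gb_month": 0.90},
    "silver": {"availability": 0.999,  "max_latency_ms": 20,  "cost_per_gb_month": 0.45},
    "bronze": {"availability": 0.99,   "max_latency_ms": 100, "cost_per_gb_month": 0.15},
}

def pick_tier(required_availability, latency_budget_ms):
    """Return the cheapest tier meeting the requested SLO, or None."""
    candidates = [
        (name, tier) for name, tier in service_catalog.items()
        if tier["availability"] >= required_availability
        and tier["max_latency_ms"] <= latency_budget_ms
    ]
    if not candidates:
        return None
    # Cheapest qualifying tier wins; this is what lets self-service
    # requests be matched to resources automatically.
    return min(candidates, key=lambda kv: kv[1]["cost_per_gb_month"])[0]
```

For example, a request for three nines of availability with a 50 ms latency budget would land on the silver tier rather than paying for gold.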

 

Navigating to effective IT services delivery: Metrics, measurements and E2E management

You cannot effectively manage what you do not know about; likewise, without situational awareness or navigation tools, you are flying blind. E2E (end to end) tools can provide monitoring and usage metrics for reporting and accounting, including enabling comparison with other environments. Metrics include customer service satisfaction, SLOs and SLAs, QoS, performance, availability and the cost of services delivered.

 

The importance of data protection for virtual, cloud and physical environments

Clouds and virtualization are important tools and technologies for protecting existing consolidated or converged as well as traditional environments. Likewise, virtual and cloud environments or data placed there also need to be protected. Now is the time to rethink and modernize your data protection strategy to be more effective, protecting, preserving and serving more data for longer periods of time with less complexity and cost.

 

Packing smart and effectively for your journey: Data footprint reduction (DFR)

Reducing your data footprint impact leveraging data footprint reduction (DFR) techniques, technologies and best practices is important for enabling an optimized, efficient and effective IT services delivery environment. Reducing your data footprint is enabled with clouds and virtualization providing a means and mechanism for archiving inactive data and for transparently moving it. On the other hand, moving to a cloud and virtualized environment to do more with what you have is enhanced by reducing the impact of your data footprint. The ABCDs of data footprint reduction include Archiving, Backup modernization, Compression and consolidation, Data management and dedupe along with Storage tiering and thin provisioning among other techniques.
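To make the ABCDs concrete, here is a rough, hypothetical sketch of how the techniques compound. The archive percentage and reduction ratios are illustrative assumptions only; real results vary widely by data type and workload:

```python
# Hypothetical sketch: estimating the combined effect of data footprint
# reduction (DFR) techniques. Ratios are illustrative assumptions; real
# reduction depends heavily on the data being stored.
def effective_footprint(raw_tb, archive_pct=0.30, compress_ratio=2.0, dedupe_ratio=4.0):
    """Apply archiving first (moves inactive data off primary storage),
    then compression and dedupe to what remains; returns the TB of
    primary capacity actually consumed."""
    remaining = raw_tb * (1 - archive_pct)   # e.g. archive 30% of inactive data
    remaining /= compress_ratio              # e.g. 2:1 compression
    remaining /= dedupe_ratio                # e.g. 4:1 dedupe
    return remaining

# With these assumed ratios, 100 TB of raw data needs under 9 TB of
# primary capacity -- the point being that the techniques multiply.
print(f"{effective_footprint(100):.2f} TB remaining from 100 TB")
```

This is deliberately simplified (compression and dedupe interact and rarely stack this cleanly in practice), but it shows why combining techniques beats relying on any single one.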

Cloud and Virtual Data Storage Networking book by Greg Schulz

How the book is laid out:

  • Table of content (TOC)
  • How the book is organized and who should read it
  • Preface
  • Section I: Why the need for cloud, virtualization and data storage networks
  • Chapter 1: Industry trends and perspectives: From issues and challenges to opportunities
  • Chapter 2: Cloud, virtualization and data storage networking fundamentals
  • Section II: Managing data and resources: Protect, preserve, secure and serve
  • Chapter 3: Infrastructure Resource Management (IRM)
  • Chapter 4: Data and storage networking security
  • Chapter 5: Data protection (Backup/Restore, BC and DR)
  • Chapter 6: Metrics and measurement for situational awareness
  • Section III: Technology, tools and solution options
  • Chapter 7: Data footprint reduction: Enabling cost-effective data demand growth
  • Chapter 8: Enabling data footprint reduction: Storage capacity optimization
  • Chapter 9: Storage services and systems
  • Chapter 10: Server virtualization
  • Chapter 11: Connectivity: Networking with your servers and storage
  • Chapter 12: Cloud and solution packages
  • Chapter 13: Management and tools
  • Section IV: Putting IT all together
  • Chapter 14: Applying what you have learned
  • Chapter 15: Wrap-up, what’s next and book summary
  • Appendices:
  • Where to Learn More
  • Index and Glossary

Here is the release that went out via Business Wire (aka Bizwire) earlier today.

 

Industry Veteran Greg Schulz of StorageIO Reveals Latest IT Strategies in “Cloud and Virtual Data Storage Networking” Book
StorageIO Founder Launches the Definitive Book for Enabling Cloud, Virtualized, Dynamic, and Converged Infrastructures

Stillwater, Minnesota – September 7, 2011  – The Server and StorageIO Group (www.storageio.com), a leading independent IT industry advisory and consultancy firm, in conjunction with  publisher CRC Press, a Taylor and Francis imprint, today announced the release of “Cloud and Virtual Data Storage Networking,” a new book by Greg Schulz, noted author and StorageIO founder. The book examines strategies for the design, implementation, and management of hardware, software, and services technologies that enable the most advanced, dynamic, and flexible cloud and virtual environments.

Cloud and Virtual Data Storage Networking

The book supplies real-world perspectives, tips, recommendations, figures, and diagrams on creating an efficient, flexible and optimized IT service delivery infrastructure to support demand without compromising quality of service (QoS) in a cost-effective manner. “Cloud and Virtual Data Storage Networking” looks at converging IT resources and management technologies to facilitate efficient and effective delivery of information services, including enabling information factories. Schulz guides readers of all experience levels through the various technologies and techniques available to them for enabling efficient information services.

Topics covered in the book include:

  • Information services model options and best practices
  • Metrics for efficient E2E IT management and measurement
  • Server, storage, I/O networking, and data center virtualization
  • Converged and cloud storage services (IaaS, PaaS, SaaS)
  • Public, private, and hybrid cloud and managed services
  • Data protection for virtual, cloud, and physical environments
  • Data footprint reduction (archive, backup modernization, compression, dedupe)
  • High availability, business continuance (BC), and disaster recovery (DR)
  • Performance, availability and capacity optimization

This book explains when, where, with what, and how to leverage cloud, virtual, and data storage networking as part of an IT infrastructure today and in the future. “Cloud and Virtual Data Storage Networking” comprehensively covers IT data storage networking infrastructures, including public, private and hybrid cloud, managed services, virtualization, and traditional IT environments.

“With all the chatter in the market about cloud storage and how it can solve all your problems, the industry needed a clear breakdown of the facts and how to use cloud storage effectively. Greg’s latest book does exactly that,” said Greg Brunton of EDS, an HP company.

Click here to watch and listen as Schulz discusses his new book in this Cloud and Virtual Data Storage Networking video.

About the Book

Cloud and Virtual Data Storage Networking has 370 pages, with more than 100 figures and tables, 15 chapters plus appendices, as well as a glossary. CRC Press catalog number K12375, ISBN-10: 1439851735, ISBN-13: 9781439851739, publication September 2011. The hard cover book can be purchased now at global venues including Amazon, Barnes and Noble, Digital Guru and CRCPress.com. Companion material is located at https://storageioblog.com/book3 including images, additional information, supporting site links at CRC Press, LinkedIn Cloud and Virtual Data Storage Networking group, and other books by the author. Direct book editorial review inquiries to John Wyzalek of CRC Press at john.wyzalek@taylorfrancis.com (twitter @jwyzalek) or +1 (917) 351-7149. For bulk and special orders contact Chris Manion of CRC Press at chris.manion@taylorandfrancis.com or +1 (561) 998-2508. For custom, derivative works and excerpts, contact StorageIO at info@storageio.com.

About the Author

Greg Schulz is the founder of the independent IT industry advisory firm StorageIO. Before forming StorageIO, Schulz worked for several vendors in systems engineering, sales, and marketing technologist roles. In addition to having been an analyst, vendor and VAR, Schulz also gained real-world hands on experience working in IT organizations across different industry sectors. His IT customer experience spans systems development, systems administrator, disaster recovery consultant, and capacity planner across different technology domains, including servers, storage, I/O networking hardware, software and services. Today, in addition to his analyst and research duties, Schulz is a prolific writer, blogger, and sought-after speaker, sharing his expertise with worldwide technology manufacturers and resellers, IT users, and members of the media. With an insightful and thought-provoking style, Schulz is also author of the books “The Green and Virtual Data Center” (CRC Press, 2009) which is on the Intel developers recommended reading list and the SNIA-endorsed reading book “Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures” (Elsevier, 2004). Schulz is available for interviews and commentary, briefings, speaking engagements at conferences and private events, webinars, video and podcast along with custom advisory consultation sessions. Learn more at https://storageio.com.

End of press release.

Wrap up

I want to express thanks to all of those involved with the project, which spanned the past year.

Stay tuned for more news and updates pertaining to Cloud and Virtual Data Storage Networking, along with related material including upcoming events as well as chapter excerpts. Speaking of events, here is information on an upcoming workshop seminar that I will be involved with for IT storage and networking professionals, to be held October 4th and 5th in the Netherlands.

You can get your copy now at global venues including Amazon, Barnes and Noble, Digital Guru and CRCPress.com.

Ok, nuff said, for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

StorageIO going Dutch again: October 2011 Seminar for storage professionals

Greg Schulz of StorageIO, in conjunction with our Dutch partner Brouwer Storage Consultancy, will be presenting a two day workshop seminar for IT storage, virtualization, and networking professionals Monday 3rd and Tuesday 4th of October 2011 at Ampt van Nijkerk, Netherlands.

Brouwer Storage Consultancy | The Server and StorageIO Group

This two day interactive education seminar for storage professionals will focus on current data and storage networking trends, technology and business challenges, and the technologies and solutions available to address them. During the seminar, learn what technologies and management techniques are available, how different vendors' solutions compare, and what to use when and where. The seminar digs into the various IT tools, techniques, technologies and best practices for enabling an efficient, effective, flexible, scalable and resilient data infrastructure.

The format of this two day seminar will be a mix of presentation and interactive discussion, allowing attendees plenty of time to discuss among themselves and with the seminar presenters. Attendees will gain insight into how to compare and contrast various technologies and solutions, in addition to identifying and aligning those solutions to their specific issues, challenges and requirements.

Major themes that will be discussed include:

  • Who is doing what with various storage solutions and tools
  • Is RAID still relevant for today and tomorrow
  • Are hard disk drives and tape finally dead at the hands of SSD and clouds
  • What am I routinely hearing, seeing or being asked to comment on
  • Enabling storage optimization, efficiency and effectiveness (performance and capacity)
  • Opportunities for leveraging various technologies, techniques and trends
  • Supporting virtual servers including re-architecting data protection
  • How to modernize data protection (backup/restore, BC, DR, replication, snapshots)
  • Data footprint reduction (DFR) including archive, compression and dedupe
  • Clarifying cloud confusion, don’t be scared, however look before you leap
  • Big data, big bandwidth and virtual desktop infrastructures (VDI)

In addition, this two day seminar will look at new and improved technologies and techniques, and who is doing what, along with discussion of industry and vendor activity including mergers and acquisitions. Beyond seminar handout materials, attendees will also receive a copy of Cloud and Virtual Data Storage Networking (CRC Press) by Greg Schulz, which looks at enabling efficient, optimized and effective information services delivery across cloud, virtual and traditional environments.

Cloud and Virtual Data Storage Networking Book

Buzzwords and topic themes to be discussed among others include E2E, FCoE and DCB, CNAs, SAS, I/O virtualization, server and storage virtualization, public and private cloud, Dynamic Infrastructures, VDI, RAID and advanced data protection options, SSD, flash, SAN, DAS and NAS, object storage, big data and big bandwidth, backup, BC, DR, application optimized or aware storage, open storage, scale out storage solutions, federated management, metrics and measurements, performance and capacity, data movement and migration, storage tiering, data protection modernization, SRA and SRM, data footprint reduction (archive, compress, dedupe), unified and multi-protocol storage, solution bundle and stacks.

For more information or to register contact Brouwer Storage Consultancy

Brouwer Storage Consultancy
Olevoortseweg 43
3861 MH Nijkerk
The Netherlands
Telephone: +31-33-246-6825
Cell: +31-652-601-309
Fax: +31-33-245-8956
Email: info@brouwerconsultancy.com
Web: www.brouwerconsultancy.com

Brouwer Storage Consultancy

Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Poll: Networking Convergence, Ethernet, InfiniBand or both?

I just received an email in my inbox from Voltaire along with a pile of other advertisements, advisories, alerts and announcements from other folks.

What caught my eye in the email was that it announces new survey results, which you can read here as well as below.

The question this survey announcement prompts for me, and hence why I am posting it here, is how dominant InfiniBand will be on a go-forward basis. The answer, I think, is it depends…

It depends on the target market or audience, what their applications and technology preferences are along with other service requirements.

I think there is and will remain a place for InfiniBand; the question is where and for what types of environments, as well as why have both InfiniBand and Ethernet, including Fibre Channel over Ethernet (FCoE), in support of unified or converged I/O and data networking.

So here is the note that I received from Voltaire:

 

Hello,

A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

The full press release is below.  Please contact me if you would like to speak with a Voltaire executive for further commentary.

Regards,
Christy

____________________________________________________________
Christy Lynch| 978.439.5407(o) |617.794.1362(m)
Director, Corporate Communications
Voltaire – The Leader in Scale-Out Data Center Fabrics
christyl@voltaire.com | www.voltaire.com
Follow us on Twitter: www.twitter.com/voltaireltd

FOR IMMEDIATE RELEASE:

IT Survey Finds Executives Planning Converged Network Strategy:
Using Both InfiniBand and Ethernet

Fabric Performance Key to Making Data Centers Operate More Efficiently

CHELMSFORD, Mass. and RA’ANANA, Israel, January 12, 2010 – A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

Voltaire queried more than 120 members of the Global CIO & Executive IT Group, which includes CIOs, senior IT executives, and others in the field that attended the 2009 MIT Sloan CIO Symposium. The survey explored their data center networking needs, their choice of interconnect technologies (fabrics) for the enterprise, and criteria for making technology purchasing decisions.

“Increasingly, InfiniBand and Ethernet share the ability to address key networking requirements of virtualized, scale-out data centers, such as performance, efficiency, and scalability,” noted Asaf Somekh, vice president of marketing, Voltaire. “By adopting a converged network strategy, IT executives can build on their pre-existing investments, and leverage the best of both technologies.”

When asked about their fabric choices, 45 percent of the respondents said they planned to implement InfiniBand together with Ethernet as they made future data center enhancements. Another 54 percent intended to rely on Ethernet alone.

Among additional survey results:

  • When asked to rank the most important characteristics for their data center fabric, the largest number (31 percent) cited high bandwidth. Twenty-two percent cited low latency, and 17 percent said scalability.
  • When asked about their top data center networking priorities for the next two years, 34 percent again cited performance. Twenty-seven percent mentioned reducing costs, and 16 percent cited improving service levels.
  • A majority (nearly 60 percent) favored a fabric/network that is supported or backed by a global server manufacturer.

InfiniBand and Ethernet interconnect technologies are widely used in today’s data centers to speed up and make the most of computing applications, and to enable faster sharing of data among storage and server networks. Voltaire’s server and storage fabric switches leverage both technologies for optimum efficiency. The company provides InfiniBand products used in supercomputers, high-performance computing, and enterprise environments, as well as its Ethernet products to help a broad array of enterprise data centers meet their performance requirements and consolidation plans.

About Voltaire
Voltaire (NASDAQ: VOLT) is a leading provider of scale-out computing fabrics for data centers, high performance computing and cloud environments. Voltaire’s family of server and storage fabric switches and advanced management software improve performance of mission-critical applications, increase efficiency and reduce costs through infrastructure consolidation and lower power consumption. Used by more than 30 percent of the Fortune 100 and other premier organizations across many industries, including many of the TOP500 supercomputers, Voltaire products are included in server and blade offerings from Bull, HP, IBM, NEC and Sun. Founded in 1997, Voltaire is headquartered in Ra’anana, Israel and Chelmsford, Massachusetts. More information is available at www.voltaire.com or by calling 1-800-865-8247.

Forward Looking Statements
Information provided in this press release may contain statements relating to current expectations, estimates, forecasts and projections about future events that are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. These forward-looking statements generally relate to Voltaire’s plans, objectives and expectations for future operations and are based upon management’s current estimates and projections of future results or trends. They also include third-party projections regarding expected industry growth rates. Actual future results may differ materially from those projected as a result of certain risks and uncertainties. These factors include, but are not limited to, those discussed under the heading "Risk Factors" in Voltaire’s annual report on Form 20-F for the year ended December 31, 2008. These forward-looking statements are made only as of the date hereof, and we undertake no obligation to update or revise the forward-looking statements, whether as a result of new information, future events or otherwise.

###

All product and company names mentioned herein may be the trademarks of their respective owners.

 

End of Voltaire transmission:

I/O, storage and networking interface wars come and go, similar to other technology debates over what is best or which will reign supreme.

Some recent debates have been around Fibre Channel vs. iSCSI or iSCSI vs. Fibre Channel (depends on your perspective), SAN vs. NAS, NAS vs. SAS, SAS vs. iSCSI or Fibre Channel, Fibre Channel vs. Fibre Channel over Ethernet (FCoE) vs. iSCSI vs. InfiniBand, xWDM vs. SONET or MPLS, IP vs UDP or other IP based services, not to mention the whole LAN, SAN, MAN, WAN POTS and PAN speed games of 1G, 2G, 4G, 8G, 10G, 40G or 100G. Of course there are also the I/O virtualization (IOV) discussions including PCIe Single Root (SR) and Multi Root (MR) for attachment of SAS/SATA, Ethernet, Fibre Channel or other adapters vs. other approaches.

Thus when I routinely get asked about what is best, my answer usually is a qualified "it depends," based on what you are doing, what you are trying to accomplish, your environment, and your preferences, among other factors. In other words, I'm not hung up on or tied to any one particular networking transport, protocol, network or interface; rather, I favor the ones that work and are most applicable to the task at hand.

Now getting back to Voltaire and InfiniBand, which I think has a future for some environments; however, I don't see it being the be-all end-all it was once promoted to be. And outside of the InfiniBand faithful (there are also iSCSI, SAS, Fibre Channel, FCoE, CEE and DCE devotees, among others), I suspect that the results would be mixed.

I suspect that the Voltaire survey reflects that as well: if I surveyed an Ethernet dominated environment, I could take a pretty good guess at the results; likewise for a Fibre Channel or FCoE influenced environment. Not to mention the composition of the environment, its focus, and the business or applications being supported. One would also expect slightly different survey results from the likes of Aprius, Broadcom, Brocade, Cisco, Emulex, Mellanox (they are also involved with InfiniBand), NextIO, Qlogic (they actually do some InfiniBand activity as well), Virtensys or Xsigo (who actually support convergence of Fibre Channel and Ethernet via InfiniBand), among others.

Ok, so what is your take?

What's your preferred network interface for convergence?

For additional reading, here are some related links:

  • I/O Virtualization (IOV) Revisited
  • I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
  • Buzzword Bingo 1.0 – Are you ready for fall product announcements?
  • StorageIO in the News Update V2010.1
  • The Green and Virtual Data Center (Chapter 9)
  • Also check out what others have to say: Scott Lowe about IOV here, Stuart Miniman about FCoE here, or Greg Ferro here.
  • Oh, and for what it's worth for those concerned about FTC disclosure: Voltaire is not, nor have they been, a client of StorageIO; however, I used to work for a Fibre Channel, iSCSI, IP storage, LAN, SAN, MAN, WAN vendor and wrote a book on the topics :).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Links to Upcoming and Recent Webcasts and Videocasts

Here are links to several recent and upcoming webcasts and videocasts covering a wide range of topics. Some of these free webcasts and videocasts may require registration.

Industry Trends & Perspectives – Data Protection for Virtual Server Environments

Next Generation Data Centers Today: What’s New with Storage and Networking

Hot Storage Trends for 2008

Expanding your Channel Business with Performance and Capacity Planning

Top Ten I/O Strategies for the Green and Virtual Data Center

Cheers
Greg Schulz – StorageIO

Just When You Thought It Was Safe To Go In The Water Again!

In the shark infested waters where I/O and networking debates often rage, the Fibre Channel vs. iSCSI (or is that iSCSI vs. Fibre Channel?) debates continue, which is about as surprising as an iceberg melting because it floated into warmer water or hot air in the tropics.

Here’s a link to an article at Processor.com by Kurt Marko, “iSCSI vs. Fibre Channel: A Cost Comparison. iSCSI Targets the Low-End SAN, But Are The Cost Advantages Worth The Performance Trade-offs?”, that looks at a recent iSCSI justification report along with some additional commentary from me about apples to oranges comparisons.

Here’s the thing: no one in their right mind would try to refute that iSCSI at 1GbE, leveraging built-in server NICs, standard Ethernet switches and operating system supplied path managers, is cheaper than, say, 4Gb Fibre Channel or even legacy 1Gb and 2Gb Fibre Channel. However, that is hardly an apples to apples comparison.

A more interesting comparison is, for example, 10GbE iSCSI compared to 1GbE iSCSI (again not a fair comparison). Or look at the new solution from HP and Qlogic: for about $8,200 USD you get an 8Gb FC switch with a bunch of ports for expansion, four (4) PCIe 8Gb FC adapters, plus cables and transceiver optics. While not as cheap as 1GbE ports built into a server or an off the shelf Ethernet switch, that is a far cry from the usual apples to oranges comparison of no cost Ethernet NICs vs. $1,500 FC adapters and high priced FC director ports.

To be fair, put this into comparison with 10GbE adapters (and probably not a real apples to apples comparison at that), which on CDW go from about $600 USD (without transceivers) to $1,100 to $1,500 for a single port with transceivers, or about $2,500 to $3,000 or more for dual or multi-port.
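Using the ballpark figures cited above, here is the back-of-the-envelope per-server arithmetic as a quick sketch (the prices are the article's rough numbers of the time, not current quotes, and the 10GbE side still excludes switch ports):

```python
# Rough per-server-connection math using the ballpark figures above.
# Prices are the article's approximate numbers, not current quotes.
fc_bundle_price = 8200        # 8Gb FC switch + four PCIe 8Gb FC adapters,
                              # cables and transceiver optics (HP/Qlogic bundle)
fc_servers_connected = 4
fc_cost_per_server = fc_bundle_price / fc_servers_connected

# 10GbE adapter street prices cited above (with transceivers);
# switch ports NOT included, so still not quite apples to apples.
ten_gbe_single_port = (1100, 1500)
ten_gbe_dual_port = (2500, 3000)

print(f"8Gb FC per server (switch included): ${fc_cost_per_server:,.0f}")
print(f"10GbE single port adapter only: ${ten_gbe_single_port[0]:,}-${ten_gbe_single_port[1]:,}")
print(f"10GbE dual port adapter only: ${ten_gbe_dual_port[0]:,}-${ten_gbe_dual_port[1]:,}")
```

The point of the arithmetic: at roughly $2,050 per connected server with switching included, the FC bundle sits in the same range as a 10GbE adapter alone, which is why the usual "free NICs vs. $1,500 HBAs" framing is an apples to oranges comparison.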

So the usual counter argument to trying to make a more apples to apples comparison is that iSCSI deployments do not need the performance of 10GbE or 8Gb Fibre Channel, which is very valid; however, then the comparison should be iSCSI vs. NAS.

Here’s the bottom line: I like iSCSI for its target markets and see a huge upside and growth opportunity, just as I see a continued place for Fibre Channel and, moving forward, FCoE leveraging Ethernet as the common denominator (at least for now), as well as NAS for data sharing and SAS for small deployments requiring shared storage (assuming a shared SAS array, that is).

I’m a fan of using the right technology or tool for the task at hand, and if that gets me in trouble with the iSCSI purist who wants everything on iSCSI, well, too bad, so be it. Likewise, if the FC police are not happy that I’m not ready and willing to squash out the evil iSCSI, well, too bad, get over it; same with NAS, InfiniBand and SAS. That’s not to say I don’t take a side or preference; rather, applied to the right task at hand, I’m a huge fan of these and other technologies, hence the discussion about apples to apples comparisons and applicability.

Cheers
GS