Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash

Yesterday hard disk drive (HDD) vendor Western Digital (WD) bought Virident, a nand flash PCIe Solid State Device (SSD) card vendor, for $645M, and today networking and server vendor Cisco bought Whiptail, an SSD-based storage system startup, for a little over $400M. Here is an industry trends perspective post that I did yesterday on WD and Virident.

Obviously this raises a couple of questions, some of which I brought up in my post yesterday about WD, Virident, Seagate, FusionIO and others.

Questions include:

Does this mean Cisco is getting ready to take on EMC, NetApp, HDS and its other storage partners who leverage the Cisco UCS server?

IMHO, at least near term, no more than they have in the past, nor any more than EMC's partnership with Lenovo indicates a shift in what is done with vBlocks. On the other hand, some partners or customers may be as nervous as a long-tailed cat next to a rocking chair (Google it if you don’t know what it means ;).

Is Cisco going to continue to offer Whiptail SSD storage solutions on a standalone basis, or pull them in as part of solutions, similar to what it has done with other acquisitions?

IMHO this is one of the most fundamental questions. Despite the press release and statements about this being a UCS focus, a clear sign of proof for Cisco will be whether they rein in (if they go that route) Whiptail from being sold as a general storage solution (with SSD), as opposed to being part of a solution bundle.

How will Cisco manage its relationships in a coopetition manner, cooperating with the likes of EMC in the joint VCE initiative along with FlexPod partner NetApp among others? Again, time will tell.

Also, while most of the discussion about NetApp has been around the UCS-based FlexPod business, there is the other side of the discussion: what about NetApp E-Series storage, including the SSD-based EF540 that competes with Whiptail (among others)?

Many people may not realize how much DAS storage (including fast SAS, high-capacity SAS and SATA, and PCIe SSD cards) Cisco sells as part of UCS solutions that are not vBlock, FlexPod or other partner systems.

NetApp and Cisco have partnerships that go beyond the FlexPod (UCS and ONTAP-based FAS), so it will be interesting to see what happens in that space (if anything). This is where Cisco and their UCS acquiring Whiptail is not that different from IBM buying TMS to complement their servers (and storage) while also partnering with other suppliers; the same holds true for server vendors Dell, HP, IBM and Oracle among others.

Can Cisco articulate and convince their partners, customers, prospects and others that the Whiptail acquisition is more about direct attached storage (DAS), which includes both internal dedicated and external shared devices?

Keep in mind that DAS does not have to mean Dumb A$$ Storage as some might have you believe.

Then there are the more popular questions of who is going to get bought next, and what will NetApp, Dell, Seagate, Huawei and a few others do?

Oh, btw, funny how I have not seen any of the pubs mention that Whiptail CEO Dan Crain is a former Brocadian (e.g. former CTO of Brocade, a Cisco competitor), just saying.

Congratulations to Dan and his crew and enjoy life at Cisco.

Stay tuned as the fall 2013 nand flash SSD cache dash and cash dance activities are well underway.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

WD buys nand flash SSD storage I/O cache vendor Virident

Congratulations to Virident for being bought today for $645 Million USD by Western Digital (WD). Virident, a nand flash PCIe card startup, has been around for several years and in the last year or two has gained more industry awareness as a competitor to FusionIO among others.

There is a nand flash solid state device (SSD) cash dash occurring, not to mention fast cache dances, in the IT and data infrastructure (e.g. storage and I/O) sector specifically.

Why the nand flash SSD cash dash and cache dance?

Here is a piece that I did today over at InfoStor on a related theme that sets the basis of why the nand flash-based SSD market is popular for storage and as a cache. Hence there is a flash cash dash, and by some a dance, for increased storage I/O performance.

Like the hard disk drive (HDD) industry before it, which despite what some pundits and prophets have declared (for years if not decades) to be dead (it is still alive), there have been many startups, shutdowns, mergers and acquisitions along with some transformations. Granted, solid-state memory is part of the present and future, being deployed in new and different ways.

The same thing has occurred in the nand flash-based SSD sector, with LSI acquiring SandForce, and SanDisk picking up Pliant and FlashSoft among others. Then there is Western Digital (WD), which recently has danced with their cash as they dash to buy up all things flash including STEC (drives and PCIe cards), VeloBit (cache software), Virident (PCIe cards), along with Arkeia (backup) and an investment in Skyera.

What about industry trends and market dynamics?

Meanwhile there have been some other changes, with former industry darling and high-flying post-IPO stock FusionIO hitting market reality with a sudden CEO departure a few months ago. However, after a few months of their stock being pummeled, today it bounced back, perhaps as people speculate who will buy FusionIO now that WD has picked up Virident. Note that one of Virident's OEM customers is EMC for their XtremSF PCIe flash cards, as are Micron and LSI.

Meanwhile STEC, also now owned by WD, was EMC's original flash SSD drive supplier, or what they refer to as EFDs (Electronic Flash Devices), not to mention having also supplied HDDs to them (also keep in mind WD bought HGST a year or so back).

There are some early signs, such as FusionIO's stock price jumping today, which was probably oversold. Perhaps people are now speculating that Seagate, who had been an investor in Virident (bought by WD for $645 million today), might be in the market for somebody else? Alternatively, perhaps WD didn't see the value in a FusionIO, or wasn't willing to make a flash cache cash grab dash of that size? Also note Seagate won a $630 million infringement lawsuit vs. WD (recently upheld on appeal; here and here).

Does that mean FusionIO could become Seagate's target, or that of NetApp, Oracle or somebody else with the cash and willingness to dash and grab a chunk of the nand flash and cache market?

Likewise, there are the software I/O and caching tool vendors, some of which are tied to VMware and virtual servers, while others that are more flexible are gaining popularity. What about the systems or solution appliance play, could that be in the hunt for a Seagate?

Anything is possible; however, IMHO that would be a risky move, one that many at Seagate probably still remember from their experiment with Xiotech, not to mention stepping on the toes of their major OEM customer partners.

Thus I would expect Seagate, if they do anything, to go more along the lines of a component-type supplier, meaning a FusionIO (yes, they have NexGen, however that could be easily dealt with), OCZ, or perhaps even an LSI or Micron, however some of those start to get rather expensive for a quick flash cache grab for some stock and cash.

Also, keep in mind that FusionIO, in addition to their PCIe flash cards, also has the ioTurbine software caching tool. If you are not familiar with it, IBM recently made an announcement of their Flash Cache Storage Accelerator (FCSA), which has an affiliation with guess who?

Closing comments (for now)

Some of the systems or solutions players will survive, perhaps even being acquired as XtremIO was by EMC, or file for an IPO like Violin, or express their wish to IPO and/or be bought, such as all the others (e.g. Skyera, Whiptail, Pure, Solidfire, Cloudbyte, Nimbus, Nimble, Nutanix, Tegile, Kaminario, Greenbyte, and Simplivity among others).

Here's the thing: those who really do know what is going to happen are not saying and probably cannot say, while those who are talking about what will happen are like the rest of us, just speculating, providing perspectives or stirring the pot among other things.

So who will be next in the flash cache SSD cash dash dance?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IBM Server Side Storage I/O SSD Flash Cache Software

As I often say, the best server storage I/O or IOP is the one that you do not have to do. The second best storage I/O or IOP is the one with the least impact, or that can be done in a cost-effective way. Likewise the question is not if solid-state devices (SSD) including nand flash are in your future, rather when, where, why, with what, how much, along with from whom. Also, location matters when it comes to SSD including nand flash, with different environments and applications leveraging different placement (locality) options, not to mention: how much performance do you need vs. want?

As part of their $1 billion USD (to be spent over three years, or roughly $333 million per year) flash ahead initiative, IBM has announced their Flash Cache Storage Accelerator (FCSA) server software. While IBM did not use the term (congratulations and thank you btw), some creative marketer might want to try calling this Software Defined Cache (SDC) or Software Defined SSD (SDSSD), and if that occurs, apologies in advance ;). Keep in mind that it was about a year ago this time when IBM announced that they were acquiring SSD industry veteran Texas Memory Systems (TMS).

What was announced: introducing Flash Cache Storage Accelerator or FCSA

With this announcement of FCSA, slated for customer general availability by end of August, IBM joins EMC and NetApp among other storage systems vendors who have developed their own, or collaborated on, server-side I/O optimization and cache software. Some of the other startup and established vendors with I/O optimization, performance acceleration and caching software include DataRAM (RAMDisk), FusionIO, Infinio (NFS for VMware), PernixData (block for VMware), Proximal Data and SanDisk (which bought FlashSoft) among others.

Read more about IBM Flash Cache Software (FCSA) including various questions and perspectives in part two of this two-part post located here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: IBM Server Side Storage I/O SSD Flash Cache Software

This is the second in a two-part post series on IBM’s Flash Cache Storage Accelerator (FCSA) for Solid State Device (SSD) storage announced today. You can view part I of the IBM FCSA announcement synopsis here.

Some FCSA SSD cache questions and perspectives

What is FCSA?
FCSA is a server-side storage I/O caching software tool that makes use of local (server-side) nand flash SSD (PCIe cards or drives). As a cache tool (view the IBM flash site here) FCSA provides persistent read caching on IBM servers (xSeries, Flex and Blade x86 based systems) with a write-through cache (e.g. data cached for later reads), while write data is written directly to block-attached storage including SANs. Back-end storage can be iSCSI, SAS, FC or FCoE based block systems from IBM or others, including all-SSD, hybrid SSD or traditional HDD based solutions.
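
To make that concrete, below is a minimal sketch (illustrative only, not IBM's actual implementation; the class name, the lba parameter and the dictionaries standing in for devices are all made up) of the general write-through read-cache pattern that FCSA and similar tools follow: reads are served from local flash when possible, while writes always go straight to the back-end block storage with a copy kept for later reads.

```python
# Minimal sketch of a write-through read cache (illustrative, not IBM FCSA code).
# Plain dictionaries stand in for the local flash device and back-end SAN LUN.

class WriteThroughReadCache:
    def __init__(self, flash, san):
        self.flash = flash  # local nand flash SSD used as a persistent read cache
        self.san = san      # back-end block storage (iSCSI, SAS, FC or FCoE)

    def read(self, lba):
        data = self.flash.get(lba)
        if data is not None:            # cache hit: no back-end I/O needed
            return data
        data = self.san[lba]            # cache miss: read from back-end storage
        self.flash[lba] = data          # populate the cache for later reads
        return data

    def write(self, lba, data):
        self.san[lba] = data            # write-through: back-end always gets the write
        self.flash[lba] = data          # copy kept locally for later reads
```

The ordering is the whole point: because every write lands on the back-end storage, the cache can be lost or discarded without losing data, while reads resolve locally (closer to the application) whenever possible.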

How is this different from just using a dedicated PCIe nand flash SSD card?
FCSA complements those by using them as persistent storage to cache storage I/O reads to boost performance. By using the PCIe nand flash card or SSD drives, FCSA and other storage I/O cache optimization tools free up valuable server-side DRAM from having to be used as a read cache on the servers. On the other hand, caching tools such as FCSA also keep local cached reads closer to the applications on the servers (e.g. locality of reference), reducing the impact on back-end shared block storage systems.

What is FCSA for?
With storage I/O or IOPS and application performance in general, location matters due to locality of reference, hence the need for using different approaches for various environments. IBM FCSA is a storage I/O caching software technology that reduces the impact of applications having to do random read operations. In addition to caching reads, FCSA also has a write-through cache, which means that while data is written to back-end block storage (iSCSI, SAS, FC or FCoE based, from IBM or other vendors), a copy of the data is cached for later reads. Thus while the best storage I/O is the one that does not have to be done (e.g. can be resolved from cache), the second best would be writes that go to a storage system without competing with read requests (which are handled via cache).

Who else is doing this?
This is similar to what EMC initially announced and released in February 2012 with VFCache, since renamed XtremSW, along with other caching and I/O optimization software from others (e.g. SanDisk, Proximal Data and PernixData among others).

Does this replace IBM EasyTier?
The simple answer is no: one is for tiering (e.g. EasyTier), the other is for I/O caching and optimization (e.g. FCSA).

Does this replace or compete with other IBM SSD technologies?
With anything, it is possible to find a way to make or view it as competitive. However, in general FCSA complements other IBM storage I/O optimization and management software tools such as EasyTier, as well as leveraging and coexisting with their various SSD products (from PCIe cards to drives to drive shelves to all-SSD and hybrid SSD solutions).

How does FCSA work?
The FCSA software works either in a physical machine (PM) bare metal mode with Microsoft Windows operating systems (OS) such as Server 2008 and 2012 among others, with *nix support for Red Hat Linux, or in a VMware virtual machine (VM) environment. In a VMware environment, High Availability (HA), DRS and vMotion services and capabilities are supported. Hopefully it will be sooner vs. later that we hear IBM make a follow-up announcement (pure speculation and wishful thinking) supporting more hypervisors (e.g. Hyper-V, Xen, KVM) along with CentOS, Ubuntu or Power-based systems including IBM pSeries. Read more about IBM Pure and Flex systems here.

What about server CPU and DRAM overhead?
As should be expected, a minimal amount of server DRAM (e.g. main memory) and CPU processing cycles are used to support the FCSA software and its drivers. Note the reason I say "as should be expected" is: how can you have software running on a server doing any type of work that does not need some amount of DRAM and processing cycles? Granted, some vendors will try to spin and say that there is no server-side DRAM or CPU consumed, which would be true only if they are completely external to the server (VM or PM). The important thing is to understand how much CPU and DRAM are consumed, along with the corresponding effectiveness benefit that is derived.

Does FCSA work with NAS (NFS or CIFS) back-end storage?
No, this is a server-side, block-only cache solution. However, having said that, if your applications or server are presenting shared storage to others (e.g. out the front-end) as NAS (NFS, CIFS, HDFS) using block storage (back-end), then FCSA can cache the storage I/O going to those back-end block devices.

Is this an appliance?
Short and simple answer is no, however I would not be surprised to hear some creative software defined marketer try to spin it as a flash cache software appliance. What this means is that FCSA is simply IO and storage optimization software for caching to boost read performance for VM and PM servers.

What does this hardware or storage agnostic stuff mean?
Simple, it means that FCSA can work with various nand flash PCIe cards or flash SSD drives installed in servers, as well as with various back-end block storage including SAN from IBM or others. This includes being able to use block storage using iSCSI, SAS, FC or FCoE attached storage.

What is the difference between EasyTier and FCSA?
Simple: FCSA provides read acceleration via caching, which in turn should offload some reads from the storage systems so that they can focus on handling writes or read-ahead operations. EasyTier, on the other hand, is, as its name implies, for tiering or movement of data in a more deterministic fashion.

How do you get FCSA?
It is software that you buy from IBM that runs on an IBM x86 based server. It is licensed on a per-server basis, including one year of service and support. IBM has also indicated that they have volume or multiple-server licensing options.

Does this mean IBM is competing with other software based IO optimization and cache tool vendors?
IBM is focusing on selling and adding value to their server solutions. Thus while you can buy the software from IBM for their servers (e.g. no bundling required), you cannot buy the software to run on your AMD/SeaMicro, Cisco (including EMC/VCE and NetApp), Dell, Fujitsu, HDS, HP, Lenovo, Oracle or SuperMicro among other vendors' servers.

Will this work on non-IBM servers?
IBM is only supporting FCSA on IBM x86 based servers; however, you can buy the software without having to buy a solution bundle (e.g. servers or storage).

What is this Cooperative Caching stuff?
Cooperative caching takes the next step from a simple read cache with write-through to also support cache coherency in a shared environment, as well as leveraging tighter application or guest operating system and storage system integration. For example, applications can work with storage systems to make intelligent, predictive, informed decisions on what to pre-fetch or read ahead and cache, as well as enable cache warming on restart. Another example is where, in a shared storage environment, if one server makes a change to a shared LUN or volume, the local server-side caches are also updated to prevent stale or inconsistent reads from occurring.
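
As a generic sketch of that last example (again illustrative; the class name, the peer list and the dictionaries standing in for devices are all made up, and real implementations coordinate through the storage system or a clustering protocol), here is the basic invalidation idea:

```python
# Generic sketch of cooperative cache coherency (illustrative only).
# Each server keeps a local read cache of a shared LUN; on a write, peers are
# told to drop their cached copy of that block so they cannot serve stale reads.

class CooperativeCacheNode:
    def __init__(self, san, peers=None):
        self.san = san            # shared back-end LUN (a dict as a stand-in)
        self.cache = {}           # local server-side read cache
        self.peers = peers or []  # other servers caching the same LUN

    def read(self, lba):
        if lba in self.cache:                 # serve locally when possible
            return self.cache[lba]
        data = self.san.get(lba)              # miss: read from the shared LUN
        self.cache[lba] = data
        return data

    def write(self, lba, data):
        self.san[lba] = data                  # write through to the shared LUN
        self.cache[lba] = data
        for peer in self.peers:               # coherency: invalidate peer copies
            peer.invalidate(lba)

    def invalidate(self, lba):
        self.cache.pop(lba, None)             # drop the now-stale local copy
```

Wire two nodes to the same san dictionary, list each as the other's peer, and a write on one node drops the stale copy on the other, so its next read refetches current data.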

Can FCSA use multiple nand flash SSD devices on the same server?
Yes, IBM FCSA supports use of multiple server-side PCIe and or drive based SSD devices.

How is cache coherency maintained including during a reboot?
While data stored in the nand flash SSD device is persistent, it's up to the server and applications working with the storage systems to decide if there is coherent or stale data that needs to be refreshed. Likewise, since FCSA is server-side and back-end storage system or SAN agnostic, without cooperative caching it will not know if the underlying data for a storage volume changed without being notified by another server that modified it. Thus if using shared back-end including SAN storage, do your due diligence to make sure multi-host access to the same LUNs or volumes is being coordinated with some server-side software to support cache coherency, something that would apply to all vendors.

What about cache warming or reloading of the read cache?
Some vendors have tightly integrated caching software and storage systems, something IBM refers to as cooperative caching, that have the ability to re-warm the cache. With solutions that support cache re-warming, the cache software and storage systems work together to maintain cache coherency while pre-loading data from the underlying storage system based on hot bands or other profiles and experience. As of this announcement, FCSA does not support cache warming on its own.
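
For a sense of what re-warming involves, here is a generic sketch (illustrative only; the function names and the JSON hot-list file are made up, not how IBM or anyone else necessarily implements it): persist which blocks were hot, then after a restart pre-read those blocks from the back-end storage so the cache does not start cold.

```python
# Generic sketch of cache re-warming (illustrative only): remember which LBAs
# were cached, then pre-fetch them from back-end storage after a restart.

import json

def save_hot_list(cache, path):
    with open(path, "w") as f:
        json.dump(sorted(cache.keys()), f)   # persist the list of hot blocks

def rewarm(cache, san, path):
    with open(path) as f:
        for lba in json.load(f):
            cache[lba] = san.get(lba)        # re-read current data from storage
```

Note that re-warming re-reads the data from the storage system rather than trusting old local copies, which is how coherency is maintained, and why this works best when the cache software and storage system cooperate.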

Does IBM have service or tools to complement FCSA?
Yes, IBM has assessment, profiling and planning tools that are available on a free consultation basis, with a technician to check your environment. Of course, the next logical step would be for IBM to make the tools available via free download or on some other basis as well.

Do I recommend and have I tried FCSA?
On paper, or via WebEx, YouTube or other venues, FCSA looks interesting and capable, a good fit for some environments, particularly if IBM server-based. However, since my PM and VMware VM based servers are from other vendors, and FCSA only runs on IBM servers, I have not actually given it a hands-on test drive yet. Thus if you are looking at storage I/O optimization and caching software tools for your VM or PM environment, check out IBM FCSA to see if it meets your needs.

General comments

It is great to see server and storage systems vendors add value to their solutions with I/O and performance optimization as well as caching software tools. However, I am also concerned with the growing numbers of different software tools that only work with one vendor’s servers or storage systems, or at least are supported as such.

This reminds me of a time not all that long ago (ok, for some longer than others) when we had a proliferation of different host bus adapter (HBA) driver and pathing drivers from various vendors. The result is a hodge podge (a technical term) of software running on different operating systems, hypervisors, PMs, VMs and storage systems, all of which need to be managed. On the other hand, for the time being perhaps the benefit will outweigh the pain of having different tools. That is where there are options from server-side vendor centric, storage system focused, or third-party software tool providers.

Another consideration is that some tools work in VMware environments, others support multiple hypervisors, while others also support bare metal servers or PMs. Which applies to your environment will of course depend. After all, if you are an all-VMware environment, given that many of the caching tools tend to be VMware focused, you have more options vs. those who are still predominately PM environments.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?

Today computer and data storage memory vendor Viking announced that SSD vendor SolidFire has deployed Viking's SATADIMM modules in the DDR3 DIMM (e.g. Random Access Memory (RAM) main memory) slots of their SF SSD based storage solution.

SolidFire SF solution with SATADIMM via Viking

Nand flash SATA SSD in a DDR3 DIMM slot?

Per Viking, SolidFire uses the SATADIMMs as boot devices and cache to complement the normal SSD drives used in their SF SSD storage grid or cluster. For those not familiar, SolidFire SF storage systems or appliances are based on industry-standard servers populated with SSD devices, which in turn are interconnected with other nodes (servers) to create a grid or cluster of SSD performance and space capacity. Thus as nodes are added, performance, availability and capacity also increase, all of which is accessed via iSCSI. Learn more about SolidFire SF solutions on their website here.

Here is the press release that Viking put out today:

Viking Technology SATADIMM Increases SSD Capacity in SolidFire’s Storage System (Press Release)

Viking Technology’s SATADIMM enables higher total SSD capacity for SolidFire systems, offering cloud infrastructure providers an optimized and more powerful solution

FOOTHILL RANCH, Calif., August 12, 2013 – Viking Technology, an industry leading supplier of Solid State Drives (SSDs), Non-Volatile Dual In-line Memory Module (NVDIMMs), and DRAM, today announced that SolidFire has selected its SATADIMM SSD as both the cache SSD and boot volume SSD for their storage nodes. Viking Technology’s SATADIMM SSD enables SolidFire to offer enhanced products by increasing both the number and the total capacity of SSDs in their solution.

“The Viking SATADIMM gives us an additional SSD within the chassis allowing us to dedicate more drives towards storage capacity, while storing boot and metadata information securely inside the system,” says Adam Carter, Director of Product Management at SolidFire. “Viking’s SATADIMM technology is unique in the market and an important part of our hardware design.”

SATADIMM is an enterprise-class SSD in a Dual In-line Memory Module (DIMM) form factor that resides within any empty DDR3 DIMM socket. The drive enables SSD caching and boot capabilities without using a hard disk drive bay. The integration of Viking Technology’s SATADIMM not only boosts overall system performance but allows SolidFire to minimize potential human errors associated with data center management, such as accidentally removing a boot or cache drive when replacing an adjacent failed drive.

“We are excited to support SolidFire with an optimal solid state solution that delivers increased value to their customers compared to traditional SSDs,” says Adrian Proctor, VP of Marketing, Viking Technology. “SATADIMM is a solid state drive that takes advantage of existing empty DDR3 sockets and provides a valuable increase in both performance and capacity.”

SATADIMM is a 6Gb SATA SSD with capacities up to 512GB. A next generation SAS solution with capacities of 1TB & 2TB will be available early in 2014. For more information, visit our website www.vikingtechnology.com or email us at sales@vikingtechnology.com.

Sales information is available at: www.vikingtechnology.com, via email at sales@vikingtechnology.com or by calling (949) 643-7255.

About Viking Technology Viking Technology is recognized as a leader in NVDIMM technology. Supporting a broad range of memory solutions that bridge DRAM and SSD, Viking delivers solutions to OEMs in the enterprise, high-performance computing, industrial and the telecommunications markets. Viking Technology is a division of Sanmina Corporation (Nasdaq: SANM), a leading Electronics Manufacturing Services (EMS) provider. More information is available at www.vikingtechnology.com.

About SolidFire SolidFire is the market leader in high-performance data storage systems designed for large-scale public and private cloud infrastructure. Leveraging an all-flash scale-out architecture with patented volume-level quality of service (QoS) control, providers can now guarantee storage performance to thousands of applications within a shared infrastructure. In-line data reduction techniques along with system-wide automation are fueling new block-storage services and advancing the way the world uses the cloud.

What’s inside the press release

On the surface this might cause some to jump to the conclusion that the nand flash SSD is being accessed via the fast memory bus normally used for DRAM (e.g. main memory) in a server or storage system controller. For some this might even cause a jump to the conclusion that Viking has figured out a way to use nand flash for reads and writes via a DDR3 DIMM memory location, while doing so with the Serial ATA (SATA) protocol, enabling server boot and use by any operating system or hypervisor (e.g. VMware vSphere or ESXi, Microsoft Hyper-V, Xen or KVM among others).

Note for those not familiar or needing a refresh on DRAM, DIMM and related items, here is an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press).

7.2.2 Memory

Computers rely on some form of memory, ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random access memory (RAM), non-volatile RAM (NVRAM) or flash, along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Read more of the excerpt here…

Is SATADIMM memory bus nand flash SSD storage?

In short no.

Some vendors or their surrogates might be tempted to spin such a story by masking some details to allow your imagination to run wild a bit. When I saw the press release announcement I reached out to Tinh Ngo (Director, Marketing Communications) over at Viking with some questions. I was expecting the usual marketing spin story, dancing around the questions with long answers or simply not responding with anything of substance (or that requires some substance to believe). Instead what I found was the opposite, and thus I want to share with you some of the questions and answers.

So what actually is SATADIMM? See for yourself in the following image (click on it to view, or visit the Viking site).

Via Viking website, click on image or here to learn more about SATADIMM

Does SATADIMM actually move data via the DDR3 memory bus? No, SATADIMM only draws power from it (yes, nand flash does need power when in use, contrary to a myth I was told about).

Wait, then how is data moved and how does it get to and through the SATA IO stack (hardware and software)?

Simple, there is a cable connector that attaches to the SATADIMM and in turn attaches to an internal SATA port. Or, using a different connector cable, attach the SATADIMMs (up to four) to a standard internal SAS port such as on a main board, HBA, RAID or caching adapter.

Does that mean that Viking and whoever uses SATADIMM is not actually moving data or implementing SATA via the memory bus and DDR3 DIMM sockets? That would be correct; data movement occurs via cable connection to standard SATA or SAS ports.

Wait, why would I give up a DDR3 DIMM socket in my server that could be used for more DRAM? Great question, and the answer should be: it depends on whether you need more DRAM or more nand flash. If you are out of drive slots or PCIe card slots and have enough DRAM for your needs along with available DDR3 slots, you can stuff more nand flash into those locations, assuming you have SAS or SATA connectivity.

SATADIMM with SATA connector top right via Viking

SATADIMM SATA connector via Viking

SATADIMM SAS (Internal) connector via Viking

Why not just use the onboard USB ports and plug in some high-capacity USB thumb drives to cut cost? If that is your primary objective it would probably work, and I can also think of some other ways to cut cost. However those are also probably not the primary tenets that people looking to deploy something like SATADIMM would be looking for.

What are the storage capacities that can be placed on the SATADIMM? They are available in different sizes, up to 400GB for SLC and 480GB for MLC. Viking indicated that there are larger capacities and faster 12Gb SAS interfaces in the works, which would be more of a surprise if there were not. Learn more about current product specifications here.

Good questions. Attached are three images that sort of illustrate the connector. As well, why not a USB drive? Well, there are customers that put 12 of these in a system (each with up to 480GB usable capacity), which equates to roughly an added 5.7TB inside the box without touching the drive bays (left for mass HDDs). You will then need to RAID/connect all the SATADIMMs via an HBA.

How fast is the SATADIMM, and does putting it into a DDR3 slot speed things up or slow them down? Viking has some basic performance information on their site (here). However, it generally should be the same as or similar to a SAS or SATA SSD drive, although keep SSD metrics and performance in the proper context. Also keep in mind that the DDR3 DIMM slot is only being used for power and not for data movement.

Is the SATADIMM using 3Gb or 6Gb SATA? Good question: today it is 6Gb SATA (remember that SATA can attach to a SAS port, however not vice versa). Let's see if Viking responds in the comments with more, including RAID support (hardware or software), along with other insight such as UNMAP, TRIM and Advanced Format (AF) 4KByte blocks among other things.

Have I actually tried SATADIMM yet? No, not yet. However, I would like to give it a test drive and workout if one were to show up on my doorstep (along with disclosure), and share the results if applicable.

Future of nand flash in DRAM DIMM sockets

Keep in mind that someday nand flash in DRAM DIMM sockets will actually be seen not only in a WebEx or PowerPoint demo preso (e.g. similar to what Diablo Technologies is previewing), but also in real use, for example what Micron earlier this year predicted for flash on DDR4 (more on DDR3 vs. DDR4 here).

Is SATADIMM the best nand flash SSD approach for every solution or environment? No, however it does give some interesting options for those who are PCIe card, or HDD and SSD drive slot constrained and also have available DDR3 DIMM sockets. As to price, check with Viking; I wish I could say "tell them Greg from StorageIO sent you" for a good value, however I am not sure what they would say or do.

More related reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Dave Demming talking tech education from SNW Spring 2013

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here), as well as listen via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml).

In this episode from SNW Spring 2013 in Orlando, Florida, Bruce Ravid (@BruceRave) and I visit with our guest, long-time storage industry educator Dave Demming of Solution Technology.

Our conversation covers learning and education, from instructor-led to self-paced, now and in the future. We also discuss how to learn and transfer knowledge, self-improvement and career development, time management, SNIA and SNW along with FCIA, and industry trends. Also discussed are music to learn with, expanding spheres of influence (and here) and keeping the mind active, among other things.

Lindsey Stirling

Speaking of learning new things, Dave tells us of a great new musician named Lindsey Stirling that you can check out at Amazon.com (I already bought a copy).

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Dave Demming.

StorageIO podcast

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode from SNW Spring 2013 with Dave Demming.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Speaking of SSDs (with poll)

In the spirit of solid state devices (SSD), including DRAM and nand flash, not to mention emerging phase change memory (PCM) among others that help to boost productivity and cut latency, here are a couple of quick notes and links.

Here are some more pieces to have a quick look at:
SSD & Real Estate: Location, Location, Location matters
SSD Is in Your Future: Where, When & With What Are the Questions
Storage & IO trends for 2013 and beyond

SSD, flash and DRAM, DejaVu or something new?

Is SSD only for performance?
Have SSDs been unsuccessful with storage arrays (with poll)?
End the Hardware Numbers Game

Image via 21cit (desum): The SSD hardware numbers game

What’s your take on SSD in storage arrays, cast your vote and see results here.

Also check out here what Micron has in mind with merging nand flash with the DDR4 (e.g. DRAM socket) memory bus for servers in a year or two.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

SSD, flash and DRAM, DejaVu or something new?

Recently I was in Europe for a couple of weeks, including stops at Storage Networking World (SNW) Europe in Frankfurt, StorageExpo Holland, Ceph Day in Amsterdam (object and cloud storage), and Nijkerk, where I delivered two separate two-day seminars and a single one-day seminar.

Image of Frankfurt train station; image of inside front of ICE train going from Frankfurt to Utrecht

At the recent StorageExpo Holland event in Utrecht, I gave a couple of presentations, one on cloud, virtualization and storage networking trends, the other taking a deeper look at Solid State Devices (SSDs). As in the past, StorageExpo Holland was great, in a fantastic venue, with many large exhibits and great attendance, which I heard was over 6,000 people over two days (excluding exhibitor vendors, VARs, analysts, press and bloggers), several times larger than what was seen in Frankfurt at the SNW event.

Image of Ilja Coolen (twitter @iCoolen), session host for the SSD presentation in Utrecht; image of StorageExpo Holland exhibit show floor in Utrecht

Both presentations were very well attended and included lively interactive discussion during and after the sessions. The theme of my second talk was SSD: the question is not if, rather what to use where, how and when, which brings us up to this post.

For those who have been around or using SSD for more than a decade, outside of cell phones, cameras, SD cards or USB thumb drives, SSD probably means DRAM-based with some form of data persistency mechanism. More recently, mention SSD and it implies nand flash-based, either MLC or eMLC or SLC. Some might even think of NVRAM or other forms of SSD, including emerging MRAM, PCM or memristors among others; however, let's stick to nand flash and DRAM for now.

Image of SSD technology evolution

Often in technology what is old can be new, and what is new can be seen as old. If you have seen, experienced or done something before, you will have a sense of DejaVu and it might be evolutionary. On the other hand, if you have not seen, heard or experienced it, or it has found a new audience, then it can be revolutionary or maybe even an industry first ;).

Technology evolves, gets improved on, matures, and can often go in cycles of adoption, deployment, refinement, retirement, and so forth. SSD in general has been an on-again, off-again type cycle technology for the past several decades, except for the past six to seven years. Normally there is an up cycle tied to different events: servers not being fast enough or affordable, so use SSD to help address performance woes, or drives and storage systems not being fast enough, and so forth.

Btw, for those of you who think that the current SSD focused technology (nand flash) is new, it is in fact 25 years old and still evolving and far from reaching its full potential in terms of customer deployment opportunities.

Nand flash memory has helped keep SSD practical for the past several years, riding a curve similar to the one keeping the hard disk drives (HDDs) it was supposed to replace alive. That is: improved reliability, endurance or duty cycle, better annual failure rate (AFR), larger space capacity, lower cost, and enhanced interfaces, packaging, power and functionality.

Where SSD can be used and options

DRAM, at least for the enterprise, has historically been the main option for SSD based solutions, using some form of data persistency. Data persistency options include battery backup combined with internal HDDs to de-stage information from the DRAM before power is lost. TMS (recently bought by IBM) was one of the early SSD vendors from the DRAM era that made the transition to flash, including being one of the first, many years ago, to combine DRAM as a cache layer over nand flash as a persistency or de-stage layer. This is an example of how, if you were not familiar with TMS back then and their capabilities, you might think or believe that some more recent introductions are new and revolutionary, and perhaps they are in their own right, or with enough caveats and qualifiers.

An emerging trend, which for some will be Dejavu, is that of using more DRAM in combination with nand flash SSD.

Oracle is one example of a vendor who IMHO rather quietly (intentionally or accidentally) has done this in the 7000 series storage systems as well as ExaData based database storage systems. Rest assured they are not alone; in fact many of the large legacy storage vendors have also piled up large amounts of DRAM based cache in their storage systems. For example EMC with 2TBytes of DRAM cache in their VMAX 40K, or similar systems from Fujitsu, HP, HDS, IBM and NetApp (including their recent acquisition of DRAM-based CacheIQ) among others. This has also prompted the question of whether SSD has been successful in traditional storage arrays, systems or appliances, as some would have you believe not; click here to learn more and cast your vote.

SSD, I/O, memory and storage hierarchy

So is the future in the past? Some would say no, some will say yes, however IMHO there are lessons to learn and leverage from the past while looking and moving forward.

Early SSDs were essentially RAM disks, that is, a portion of main random access memory (RAM), or what we now call DRAM, set aside as a non-persistent (unless battery backed up) cache or device. Using a device driver, applications could use the RAM disk as though it were a normal storage system. Different vendors sprang up with drivers for various platforms, and disappeared as the need for them was reduced by faster storage systems, interfaces and RAM disk drives supplied by vendors, not to mention SSD devices.
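
As a toy illustration of the RAM disk idea (hypothetical, not any particular vendor's driver; the class and method names are made up), the sketch below carves a chunk of memory into fixed-size blocks behind a block-device style read/write interface, which is roughly the abstraction those early drivers presented to applications and file systems:

```python
# Toy RAM disk sketch (illustrative only): a region of DRAM exposed through a
# block-device style interface. Contents vanish on power loss, which is why
# early enterprise DRAM SSDs added batteries and HDDs to de-stage data.

class RamDisk:
    def __init__(self, num_blocks, block_size=512):
        self.block_size = block_size
        self.blocks = bytearray(num_blocks * block_size)  # lives in main memory

    def read_block(self, lba):
        off = lba * self.block_size
        return bytes(self.blocks[off:off + self.block_size])

    def write_block(self, lba, data):
        assert len(data) == self.block_size  # whole-block writes, like a disk
        off = lba * self.block_size
        self.blocks[off:off + self.block_size] = data
```

The point of the abstraction is that anything written for a disk (file systems, databases) worked unchanged, just much faster, and with no persistence unless something de-staged the data.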

Oh, for you tech trivia types, there were also database machines from the late 80s, such as Britton Lee, that would offload your database processing functions to a specialized appliance. Sound like Oracle ExaData I, II or III to anybody?

Image of Oracle ExaData storage system

Ok, so we have seen this movie before, no worries, old movies or shows get remade, and unless you are nostalgic or cling to the past, sure some of the remakes are duds, however many can be quite good.

Same goes with the remake of some of what we are seeing now. Sure there is a generation that does not know nor care about the past; it's full speed ahead, leveraging whatever will get them there.

Thus we are seeing in-memory databases again; some of you may remember the original series (pick your generation, platform, tool and technology), with each variation getting better. With 64-bit processors, 128-bit and beyond file systems and addressing, not to mention the ability for more DRAM to be accessed directly or via memory address extension, combined with memory data footprint reduction or compression, there is more space to put things (e.g. no such thing as a data or information recession).

Let's also keep in mind that the best I/O is the I/O that you do not have to do, and that SSD, which is an extension of the memory map, plays by the same rules of real estate. That is, location matters.

Thus, here we go again for some of you (DejaVu), while others should get ready for a new and exciting ride (new and revolutionary). We are back to the future with in-memory databases, which for a time will take some pressure off underlying I/O systems, until they once again outgrow server memory addressing limits (or IT budgets).

However, do not fall into a false sense of security; there is no such thing as a data or information recession. As sure as the sun rises in the east and sets in the west, sooner or later those I/Os that were or are being kept in memory will need to be de-staged to persistent storage, either nand flash SSD, HDD or, somewhere down the road, PCM, MRAM and more.

There is another trend: with more I/Os being cached, reads are moving to where they should resolve, which is closer to the application, higher up in the memory and I/O pyramid or hierarchy (shown above).

Thus, we could see a shift over time to more writes and ugly I/Os being sent down to the storage systems. Keep in mind that any cache historically provides temporal relief; the question is how long the temporal relief lasts, or when the next new and revolutionary or DejaVu technology shows up.

Ok, go have fun now, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

SSD past, present and future with Jim Handy

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here), as well as listen via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml).

In this episode, I talk with SSD nand flash and DRAM chip analyst Jim Handy of Objective Analysis at the LSI AIS (Accelerating Innovation Summit) 2012 in San Jose. Our conversation includes SSD past, present and future; market and industry trends; who is doing what and things to keep an eye and ear open for; along with server, storage and memory convergence.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Jim and myself.

StorageIO podcast

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode SSD Past, Present and Future with Jim Handy.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Have SSDs been unsuccessful with storage arrays (with poll)?

I hear people talking about how Solid State Devices (SSDs) have not been successful with or for vendors of storage arrays, particularly legacy storage systems. Some people have also asserted that large storage arrays are dead at the hands of new purpose-built SSD appliances or storage systems (read more here).

As a reference, legacy storage systems include those from EMC (VMAX and VNX), IBM (DS8000, DCS3700, XIV, and V7000), and NetApp FAS along with those from Dell, Fujitsu, HDS, HP, NEC and Oracle among others.

Granted, EMC has launched new SSD based solutions in addition to buying startup XtremIO (aka Project X), and IBM bought SSD industry veteran TMS. IMHO, neither of those actions by either vendor signals an early retirement for their legacy storage solutions; instead they open up new markets, giving customers more options for addressing data center and I/O performance challenges. Keep in mind that the best I/O is the one that you do not have to do, with the second best being the one with the least impact to applications in a cost-effective way.

SSD, I/O, memory and storage hierarchy

Sometimes I even hear people citing or using some other person or source to attribute or make their assertions sound authoritative. You know the game: according to XYZ, or ABC said blah blah blah. Of course if you say or repeat something often enough, or hear it again and again, it can become self-convincing (e.g. industry adoption vs. customer deployments). Likewise, depending on how many degrees of separation exist between you and the information you get, the more it can change from what it originally was.

So what about it: has SSD not been successful for legacy storage system vendors, and is the only place that SSD has had success with startups or non-array based solutions?

While there have been some storage systems (arrays and appliances) that may not perform up to their claimed capabilities due to various internal architecture or implementation bottlenecks, for the most part the large vendors, including EMC, HP, HDS, IBM, NetApp and Oracle, have done very well shipping SSD drives in their solutions. Likewise some of the clean-sheet new-design based startup systems, as well as some of the startups with hybrid solutions combining HDDs and SSDs, have done well, while others are still emerging.

Where SSD can be used and options

This could also be an example where myth becomes reality based on industry adoption vs. customer deployment. What this means is that the myth that it is the startups having success vs. the legacy vendors holds from an industry adoption conversation standpoint, and is thus believed by some.

On the other hand, the myth is that vendors such as EMC or NetApp have not had success with their arrays and SSD, yet their customer deployments prove otherwise. There is also a myth that only PCIe-based SSD can be of value and that drive-based SSDs are not worth using; I have a good idea where that myth comes from.

IMHO it depends; however, it is safe to say from what I have seen directly that there are some vendors of storage arrays, including so-called legacy systems, that have had very good success with SSD. Likewise I have seen where some startups have done ok with their new clean-sheet designs, including EMC (Project X). Oh, and at least for now I am not a believer that, with the all-SSD based project "X" over at EMC, the venerable VMAX, formerly known as DMX, and its predecessor Symmetrix have finally hit the end of the line. Rather they will be positioned and play to different markets for some time yet.

Over at IBM, I don't think the DS8000, XIV, V7000 and SVC folks are winding things down now that they have bought SSD vendor TMS, who has SSD appliances and PCIe cards. Rest assured there have been successes by PCIe flash card vendors, both as targets (FusionIO) and as cache or hybrid cache-and-target systems such as those from Intel, LSI, Micron and TMS (now IBM) among others. Oh, and if you have not noticed, check out what QLogic, Emulex and some of the other traditional HBA vendors have done with and around SSD caching.

So where does the FUD that storage systems have not had success with SSD come from?

I suspect from those who would rather not see or hear about those who have had success taking attention away from them or their markets. In other words, using Fear, Uncertainty and Doubt (FUD) or some community peer pressure, there is a belief by some that if you hear enough times that something is dead or not of benefit, you will look at the alternatives.

Care to guess what the preferred alternative is for some? If you guessed a PCIe card or SSD based appliance from your favorite startup that would be a fair assumption.

On the other hand, my educated guess (ok, it's much more informed than a guess ;)) is that if you ask a vendor such as EMC or NetApp, they would disagree, while at the same time articulating the benefits of different approaches and tools. Likewise, my educated guess is that if you ask some others, they will say mixed things, and of course if you talk with the pure plays, take a wild yet educated guess what they will say.

Here is my point.

SSD, DRAM, PCM and storage adoption timeline

The SSD market, including DRAM, nand flash (SLC or MLC or any other xLC), emerging PCM or future MRAM among other technologies and packaging options, is still in its relative infancy. Yes, I know there has been significant industry adoption and many early customer deployments; however, talking with IT organizations of all sizes as well as with vendors and VARs, customer deployment of SSD is far from reaching its full potential, meaning a bright future.

Simply putting an SSD, card or drive into a solution does not guarantee results.

Likewise having a new architecture does not guarantee things will be faster.

Fast storage systems need fast devices (HDDs, HHDDs and SSDs) along with fast interfaces to connect with fast servers. Put a fast HDD, HHDD or SSD into a storage system that has bottlenecks (hardware, software, architectural design) and you may not see the full potential of the technology. Likewise, put fast ports or interfaces on a storage system that has fast devices but also a bottleneck in its controller or system architecture, and you will not realize the full potential of that solution.
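
A back-of-the-envelope way to see this (a first-approximation model with made-up numbers, not measurements of any product): to a rough first order, ignoring caching and parallelism, the effective throughput of an I/O path is bounded by its slowest stage.

```python
# First-approximation model: an I/O path is only as fast as its slowest stage.
# The numbers are invented for illustration, not measurements of any product.

def effective_throughput(stages):
    return min(stages.values())

path = {
    "front-end ports (MB/s)": 1600,  # fast interfaces
    "controller (MB/s)": 700,        # internal bottleneck
    "ssd devices (MB/s)": 2400,      # plenty of back-end device speed
}

print(effective_throughput(path))  # 700: the slow controller wins
```

Swap in a faster controller and the devices become the limit; swap in slow devices and the fast ports no longer matter, which is the point of the paragraph above.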

This is not unique to legacy or traditional storage systems, arrays or appliances as it is also the case with new clean sheet designs.

There are many new solutions that are or should be as fast as their touted marketing stories present, however just because something looks impressive in a YouTube video or slide deck or WebEx does not mean it will be fast in your environment. Some of these new design SSD based solutions will displace some legacy storage systems or arrays while many others will find new opportunities. Similar to how previous generation SSD storage appliances found roles complementing traditional storage systems, so to will many of these new generation of products.

What this all means is to navigate your way through the various marketing and architecture debates, benchmark battles, claims and counterclaims to understand what fits your needs and requirements.

What say you?

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

IBM buys flash solid state device (SSD) industry veteran TMS

How much flash (or DRAM) based Solid State Device (SSD) do you want or need?

IBM recently took a flash step, announcing it wants and needs more SSD capabilities in different packaging and functionality options to meet the demands and opportunities of customers, business partners and prospects by acquiring Texas Memory Systems (TMS).

IBM buys SSD flash vendor TMS

Unlike most of the current generation of SSD vendors (aside from those actually making the dies, chips or semiconductors, or the SSD drives), which are startups or relatively new companies, TMS is an industry veteran. Where most current SSD vendors' experience (as companies) is measured in months or at best years, TMS has seen several generations and SSD adoption cycles during its multi-decade existence.

IBM buys SSD vendor Texas Memory Systems TMS

What this means is that TMS has been around during past dynamic random access memory (DRAM) based SSD cycles or eras, as well as being an early adopter and player in the current nand flash SSD era or cycle.

Granted, some in the industry do not consider the previous DRAM based generation of products as being SSD, and vice versa, some DRAM era SSD aficionados do not consider nand flash as being real SSD. Needless to say, there are many faces or facets to SSD, ranging in media (DRAM and nand flash among others) along with packaging for different use cases and functionality.

IBM along with some other vendors recognize that the best type of IO is the one that you do not have to do. However, the reality is that some Input/Output (IO) operations need to be done with computer systems. Hence the second best type of IO is the one that can be done with the least impact to applications in a cost-effective way that meets specific service level objective (SLO) requirements. This includes leveraging main memory or DRAM as cache or buffers, along with server-based PCIe SSD flash cards as cache or target devices, internal SSD drives, external SSD drives, SSD drives and flash cards in traditional storage systems or appliances, as well as purpose-built SSD storage systems.
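As a minimal sketch of that "best IO is the one you do not have to do" idea, here is a hypothetical Python example of a read cache in front of a slower backing store; the function names and cache size are made up for illustration.

from functools import lru_cache

def read_from_backing_store(block_id: int) -> bytes:
    # Stand-in for a slow read from an HDD or array; hypothetical.
    return f"data-for-block-{block_id}".encode()

@lru_cache(maxsize=1024)  # DRAM or flash cache: repeat reads avoid the IO
def cached_read(block_id: int) -> bytes:
    return read_from_backing_store(block_id)

cached_read(42)  # first access performs the backend IO
cached_read(42)  # second access is served from cache, no IO performed
print(cached_read.cache_info())  # shows hits=1, misses=1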

While TMS does not build the actual nand flash single level cell (SLC) or multi-level cell (MLC) SSD drives (like those built by Intel, Micron, Samsung, SanDisk, Seagate, STEC and Western Digital (WD) among others), TMS does incorporate nand flash chips or components that are also used by others who make nand flash PCIe cards and storage systems.

StorageIO industry trend for storage IO

IMHO this is a good move for both TMS and IBM, both of whom have been StorageIO clients in the past (here, here and here), that was a disclosure btw ;), as it gives TMS, their partners and customers a clear path and a large organization able to invest in the technologies and solutions on a go-forward basis. In other words, TMS, who had been looking to be bought, gets certainty about their future, as do their clients.

IBM, who has used SSD based components such as PCIe flash SSD cards and SSD based drives from various suppliers, gets a PCIe SSD card of their own, along with purpose-built, mature SSD storage systems that have lineages to both DRAM and nand flash-based experiences. Thus IBM controls some of their own SSD intellectual property (e.g. IP) for PCIe cards that can in theory go into their servers, as well as into storage systems and appliances that use Intel based (e.g. xSeries from IBM) and IBM Power processor based servers as a platform. Examples include the DS8000 (Power processor based), along with Intel based XIV, SONAS, V7000, SVC, ProtecTier and PureSystems (some of which are Power based).

In addition, IBM also gets a field proven, purpose-built all-SSD storage system to compete with those from startups (Kaminario, Pure Storage, SolidFire, Violin and Whiptail among others), as well as those being announced by competitors such as EMC (e.g. Project X and Project Thunder), in addition to SSD drives that can go into servers and storage systems.

The question should not be if SSD is in your future, rather where you will be using it: in the server or a storage system, as a cache or a target, as a PCIe target or cache card, as a drive, or as a storage system. This also means asking how much SSD you need, along with what type (flash or DRAM), for what applications and how configured, among other topics.

Storage and Memory Hierarchy diagram where SSD fits

What this means is that there are many locations and places where SSD fits; one type of product or model does not fit or meet all requirements, and thus IBM, with their acquisition of TMS along with presumed partnerships with other SSD component suppliers, will be able to offer a diverse SSD portfolio.

StorageIO industry trend for storage IO

The industry trend is visible across vendors such as Cisco, Dell, EMC, IBM, HP, NetApp, Oracle and others, all of whom are either physical server and storage vendors or, in the case of EMC, a virtual server player partnered with Cisco (vBlock and VCE) and Lenovo for physical servers.

Different types and locations for SSD

Thus it only makes sense for those vendors to offer diverse SSD product and solution offerings to meet different customer and application needs vs. having a single solution that users adapt to. In other words, if all you have is a hammer, everything needs to look like a nail; however, if you have a tool box of various technologies, then it comes down to being able to leverage them, including articulating what to use when, where, why and how for different situations.

I think this is a good move for both IBM and TMS. Now let's watch how IBM and TMS go beyond the press release, slide decks and WebEx briefings covering why it is a good move, to justify their acquisition and plans moving forward, and to see the results of what is actually accomplished near and long-term.

Read additional industry trends and perspective commentary about IBM buying TMS here and here, as well as check out these related posts and content:

How much SSD do you need vs. want?
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
Speaking of speeding up business with SSD storage
Part I: PureSystems, something old, something new, something from big blue
The Many Faces of Solid State Devices/Disks (SSD)
SSD and Green IT moving beyond green washing

Meanwhile, congratulations to both IBM and TMS, ok, nuff said (for now).

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go

StorageIO industry trends and perspectives

In case you missed it, VMware recently announced spending $1.05 billion USD to acquire startup Nicira for their virtualization and software technology that enables software defined networks (SDN). Also last week, Oracle was in the news getting its hands slapped by the Better Business Bureau (BBB) National Advertising Division (NAD) for making misleading advertised performance claims vs. IBM.

On the heels of VMware buying Nicira for software defined networking (SDN), or what is also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with the announcement of its intent to acquire Xsigo. Founded in 2004, Xsigo has a hardware platform combined with software that enables attaching servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with their version of IOV.

Oracle will be acquiring the IO, networking and virtualization hardware and software vendor for an undisclosed amount. Xsigo has made its name in IO virtualization (IOV) and converged networking, along with the server and storage virtualization space, over the past several years, including via partnerships with various vendors.

Buzz word bingo

Technology buzzwords and buzz terms can often be a gray area, leaving plenty of room for marketers and PR folks to run with. Case in point: AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength, which is IO virtualization (aka IOV).

To me and others (here and here and here), it is interesting that Xsigo had not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as happened with Nicira (and now Oracle for that matter). In the press release that the Oracle marketing and PR folks put out on a Monday morning, some of the media and press, trade industry, financial and general news agencies alike, took the Oracle script hook, line and sinker, running with it.

What was effective is how many industry trade pubs and their analysts simply picked up the press release story and ran with it in the all too common race to see who could get the news or story out first, or in some cases before it actually happens.

Image of media, newspapers

To be clear, not all pubs jumped, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at SpeakinginTech highlights. I know some who took the time to call, ask around, and leverage their journalistic training to dig, research and find out what this really meant vs. simply taking and running with the script. An example of one of those calls that I had was with Beth Pariseau (aka @PariseauTT); you can read her stories here and here.

Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the better known (at least in some circles) virtual IO (VIO) and IOV descriptions. What is also interesting is that just last week, Oracle marketing had their hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance based advertisements for ExaData.

Oracle Exadata

Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

Both Oracle and Xsigo along with other InfiniBand (and some Ethernet and PCIe) focused vendors are members of the Open Fabric initiative, not to be confused with the group working on OpenFlow.

StorageIO industry trends and perspectives

Here are some other things to think about:

Oracle has a history of doing acquisitions without disclosing terms, as well as doing them based on earn-outs, as was the case with Pillar.

Oracle uses Ethernet in its servers and appliances and has also been an adopter of InfiniBand, primarily for node-to-node communication, however also for server-to-application connectivity.

Oracle is also an investor in Mellanox, the folks who make InfiniBand and Ethernet products.

Oracle has built various stacks including ExaData (database machine), Exalogic, Exalytics and the Database Appliance, in addition to their 7000 series of storage systems.

Oracle has done earlier virtualization related acquisitions including Virtual Iron.

Oracle has a reputation with some of their customers who love to hate them for various reasons.

Oracle has a reputation of being aggressive, even by the standards of other aggressive market leaders.

Integrated solution stacks (aka stack wars), or what some remember as bundles, continue, and Oracle has many solutions.

What will happen to Xsigo as you know it today (besides what the press releases are saying)?

While Xsigo was not a member of the Open Networking Forum (ONF), Oracle is.

Xsigo is a member of the Open Fabric Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.

StorageIO industry trends and perspectives

What’s my take?

While there are similarities in that both Nicira and Xsigo are involved with IO virtualization, what they are doing, how they are doing it, who they are doing it with, along with where they can play, all vary.

Not sure what Oracle paid; however, assuming it was in the couple of million dollars or less range, in cash or a combination with stock, both they and the investors, as well as some of the employees, friends and families, did ok.

Oracle also gets some intellectual property that they can combine with other earlier acquisitions via Sun and Virtual Iron, along with their investment in InfiniBand (and now also Ethernet) vendor Mellanox.

Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

For Xsigo customers the good news is that you now know who will be buying the company; however, there are and should be questions about the future beyond what is being said in press releases.

Does this acquisition give Oracle a play in the software defined networking space like Nicira gives VMware? I would say no, given the hardware dependency; however, it does give Oracle some extra technology to play with.

Likewise, while SDN is important and a popular buzzword topic, since OpenFlow comes up in conversations, perhaps that should be more of the focus vs. whether a solution is all software or a mix of hardware and software.

StorageIO industry trends and perspectives

I also find it entertaining how last week the Better Business Bureau (BBB) and NAD (National Advertising Division) slapped Oracle's hands after IBM complained of misleading performance claims about Oracle ExaData vs. IBM. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story and fed it to the press and media, who then ran with it.

I'm not convinced that this is an all-out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (makes for good headlines though). Instead I'm seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle-of-summer price. Now if Oracle really wanted to go to battle with Cisco (and others), then there are others to buy such as Brocade or Juniper, etc. However, there are other opportunities for Oracle to be focused on (or sidetracked by) right now.

Oh, let's also see what Cisco has to say about all of this, which should be interesting.

Additional related links:
Data Center I/O Bottlenecks Performance Issues and Impacts
I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
I/O Virtualization (IOV) Revisited
Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
The function of XaaS(X) Pick a letter
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?

StorageIO industry trends and perspectives

If you are interested in learning more about IOV, Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that’s enough links for now ;).

Ok, nuff said for now, as I have probably requalified for being on the Oracle you know what list for not sticking to the story script, oops, excuse me, I mean press release message.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

How much SSD do you need vs. want?

Storage I/O Industry Trends and Perspectives

I have been getting asked by IT customers, VARs and even vendors how much solid state device (SSD) storage is needed or should be installed to address IO performance needs, to which my standard answer is: it depends.

I am also being asked if there is a rule of thumb (RUT) for how much SSD you should have, either in terms of the number of devices or a percentage; IMHO, the answer is it depends. Sure, there are different RUTs floating around based on different environments, applications and workloads; however, are they applicable to your needs?

What I would recommend is instead of focusing on percentages, RUTs, or other SWAG estimates or PIROMA calculations, look at your current environment and decide where the activity or issues are. If you know how many fast hard disk drives (HDDs) are needed to get to a certain performance level and amount of used capacity, that is a good starting point.

If you do not have that information, use tools from your server, storage or third-party provider to gain insight into your activity to help size SSD. Also, if you have a database environment and are not familiar with the tools, talk with your DBAs to have them run some reports that show performance information the two of you can discuss to zero in on hot spots or opportunities for SSD.

Keep in mind when looking at SSD what it is you are trying to address by installing it. For example, is there a specific or known performance bottleneck resulting in poor response time or latency, or is there a general problem or perceived opportunity?

Storage I/O Industry Trends and Perspectives

Is there a lack of bandwidth for large data transfers, or is there a constraint on how many IO operations per second (e.g. IOPS), transactions or activities can be done in a given amount of time? In other words, the more you know where or what the bottleneck is, including whether you can trace it back to a single file, object, database, database table or other item, the closer you are to answering how much SSD you will need.
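As a quick aside, bandwidth and IOPS are linked by the IO size, which is why knowing which constraint you actually have matters. A minimal Python sketch with hypothetical numbers:

iops = 20_000      # small random IOs per second (hypothetical workload)
io_size_kb = 8     # e.g. a common database page size
bandwidth_mb_s = iops * io_size_kb / 1024
print(f"{iops} IOPS at {io_size_kb} KB each = {bandwidth_mb_s:.0f} MB/s")
# A workload can be IOPS constrained while barely using bandwidth (and
# vice versa), so measure and size for the constraint you actually have.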

As an example, if using third-party tools, those provided by SSD vendors, or other sources you decide that your IO bottlenecks are database transaction logs and system paging files, then having enough SSD space capacity to fit those is part of the solution. However, what happens when you remove the first set of bottlenecks? What new ones will appear, and will you have enough space capacity on your SSD to accommodate the next in line hot spot?
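Here is a back-of-the-envelope Python sketch of that sizing thought process; the hot spot names, sizes and headroom factor are all made-up assumptions for illustration, not a formula.

# Hypothetical hot spots identified with monitoring tools or DBA reports.
hot_spots_gb = {
    "db_transaction_logs": 120,
    "system_paging_files": 60,
    "hot_table_indexes": 200,  # a likely "next in line" hot spot
}

headroom = 0.30  # extra room for growth and the next bottleneck
required_gb = sum(hot_spots_gb.values()) * (1 + headroom)
print(f"Usable SSD capacity to target: about {required_gb:.0f} GB")
# Note: protection (e.g. RAID 1 mirroring) adds devices on top of this
# usable capacity figure.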

Keep in mind that while you may want more SSD, the question is what you can get budget approval to buy now without more proof and a business case. If you can, get some extra SSD space capacity to use for what you are confident can address other bottlenecks, or to enable new capabilities.

On the other hand, if you can only afford enough SSD to get started, make sure you also protect it. If you decide that two SSD devices (PCIe cache or target cards, drives or appliances) will take care of your performance and capacity needs, make sure to keep availability in mind. This means having extra SSD devices for RAID 1 mirroring, replication or another form of data protection and availability. Keep in mind that while traditional hard disk drive (HDD) storage is often gauged on cost per capacity, or dollars per GByte or TByte, with SSD you should measure its value on cost to performance. For example, how many IOPS, how much response time improvement, or how much bandwidth is obtained per dollar spent to meet your specific needs.
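A minimal sketch of that cost-to-performance comparison in Python, using entirely hypothetical prices and device ratings:

# (price_usd, capacity_gb, iops) — all numbers are made up for illustration.
devices = {
    "15K HDD": (300, 600, 200),
    "SSD": (1500, 400, 50_000),
}

for name, (price, capacity_gb, iops) in devices.items():
    print(f"{name}: ${price / capacity_gb:.2f} per GB, "
          f"${price / iops:.4f} per IOPS")
# On dollars per GB the HDD looks better; on dollars per IOPS the SSD is
# dramatically cheaper, which is the point of gauging SSD on cost to
# performance rather than cost per capacity.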

Related links
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Speaking of speeding up business with SSD storage
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Are large storage arrays dead at the hands of SSD?

Storage I/O trends

An industry trends and perspective.


Are large storage arrays dead at the hands of SSD? Short answer: NO, not yet.
There is still a place for traditional storage arrays or appliances, particularly those with extensive features, functionality and reliability, availability and serviceability (RAS). In other words, there is still a place for large (and small) storage arrays or appliances, including those with SSDs.

Is there a place for newer flash SSD storage systems, appliances and architectures? Yes
Similar to how traditional midrange storage arrays or appliances have found their roles vs. traditional higher-end, so-called enterprise arrays. Think, as examples, EMC CLARiiON/VNX or HP EVA/P6000 or HDS AMS/HUS or NetApp FAS or IBM DS5000 or IBM V7000 among others vs. EMC Symmetrix/DMX/VMAX or HP P10000/3Par or HDS VSP/USP or IBM DS8000. In addition to traditional enterprise or high-end storage systems and midrange (also known as modular) systems, there are also specialized appliances or targets, such as for backup/restore and archiving. Also do not forget the IO performance SSD appliances like those from TMS among others that have been around for a while.

Is the role of large storage systems changing or evolving? Yes
Given their scale and ability to do large amounts of work in a dense footprint, for some environments the role of these systems is still mission critical tier 1 application and data support. For other environments, their role continues to evolve, being used for high-density tier 2 bulk or even near-line storage with on-line access at scale.

Storage I/O trends

Does this mean there is competition between the old and new systems? Yes
In some circumstances, as we have seen already with SSD solutions. Some will position them as competing or as replacements, while others position them as complementing. For example, in the PCIe flash SSD card segment, EMC VFCache is positioned as complementing Dell, EMC, HDS, HP, IBM, NetApp, Oracle or other vendors' storage, vs. FusionIO, who positions as a replacement for the above and others. Another scenario is how some SSD vendors have and continue to position their all-flash SSD arrays, using either drives or PCIe cards, to complement and coexist with other storage systems in an environment (e.g. data center level tiering) vs. as a replacement. Also keep in mind SSD solutions that support a mix of flash devices and traditional HDDs in the same solution for capacity and cost savings or cloud access.

Does this mean that the industry has adopted all SSD appliances as the state of the art?
Avoid confusing industry adoption or talk with industry and customer deployment. They are similar; however, one is focused on what the industry talks about or discusses as the state of the art or the future, while the other is what customers are actually doing. Certainly some of the new flash SSD appliance and storage startups such as Solidfire, Nexgen, Violin, Whiptail or veteran TMS among others have promising futures, some of which may actually be in play with the current SSD market shakeout and consolidation.

Does that mean everybody is going SSD?
SSD customer adoption and deployment continues to grow, however so too does the deployment of high-capacity HDDs.

Storage I/O trends

Do SSDs need HDDs, do HDDs need SSDs? Yes
Granted, there are environments where needs can be addressed by all of one or the other. However, at least near term, there is a very strong market for tiering and a mix of SSD, some fast HDDs and lots of high-capacity HDDs to meet various needs including performance, availability, capacity, energy and economics. After all, there is no such thing as a data or information recession, yet budgets are tight or being reduced. Likewise, people and data are living longer.

What does this mean?
If there were no such thing as a data recession and budgets were a non-issue, perhaps everything could move to all-flash SSD storage systems. However, we also know that people and data are living longer along with changing data life-cycle patterns. There is also the need for performance to close the traditional data center gap between IO performance and space capacity, and its associated bottlenecks, as well as to store and keep data longer.

There will continue to be a need for a mix of high capacity and high performance. More IO will continue to gravitate towards the IO appliances; however, more data will settle in for longer-term retention and continued access as data life-cycles continue to evolve. Watch for more SSD and cache in the large systems, along with higher density SAS-NL (SAS Near Line, e.g. high capacity) type drives appearing in those systems.
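To make the economics of that mix concrete, here is a small Python sketch with entirely hypothetical capacities and per-TB costs showing why a tiered configuration keeps the blended cost down:

# (tier name, capacity_tb, cost_per_tb_usd) — hypothetical numbers only.
tiers = [
    ("SSD tier", 10, 3000),
    ("Fast HDD tier", 50, 500),
    ("Capacity SAS-NL tier", 440, 100),
]

total_tb = sum(capacity for _, capacity, _ in tiers)
total_cost = sum(capacity * cost for _, capacity, cost in tiers)
print(f"Blended cost: about ${total_cost / total_tb:.0f} per TB "
      f"across {total_tb} TB")
# A small SSD tier absorbs the hot IO while high-capacity drives keep
# the blended cost per TB far below an all-SSD configuration.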

If you like shiny new toys or technologies (SNTs) to buy, sell or talk about, there will be plenty of those to continue industry adoption, while for those who are focused on industry deployment, there will be a mix of new products and continued evolution of implementations.

Related links
Industry adoption vs. industry deployment, is there a difference?

Industry trend: People plus data are aging and living longer

No Such Thing as an Information Recession

Changing Lifecycles & Data Footprint Reduction
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
IT and storage economics 101, supply and demand
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
Researchers and marketers don’t agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved