IBM Server Side Storage I/O SSD Flash Cache Software

As I often say, the best server storage I/O or IOP is the one that you do not have to do. The second best storage I/O or IOP is the one with the least impact or that can be done in a cost-effective way. Likewise, the question is not if solid-state devices (SSDs), including nand flash, are in your future, rather when, where, why, with what, how much, and from whom. Also, location matters when it comes to SSD including nand flash, with different environments and applications leveraging different placement (locality) options, not to mention the question of how much performance you need vs. want.

As part of their $1 billion USD (to be spent over three years, or roughly $333 million per year) Flash Ahead initiative, IBM has announced their Flash Cache Storage Accelerator (FCSA) server software. While IBM did not use the term (congratulations and thank you btw), some creative marketer might want to try calling this Software Defined Cache (SDC) or Software Defined SSD (SDSSD), and if that occurs, apologies in advance ;). Keep in mind that it was about a year ago this time when IBM announced that they were acquiring SSD industry veteran Texas Memory Systems (TMS).

What was announced, introducing Flash Cache Storage Accelerator or FCSA

With this announcement of FCSA, slated for customer general availability by the end of August, IBM joins EMC and NetApp among other storage systems vendors who have developed their own, or have collaborated on, server-side I/O optimization and cache software. Some of the other startup and established vendors who have I/O optimization, performance acceleration and caching software include Dataram (RAMDisk), FusionIO, Infinio (NFS for VMware), PernixData (block for VMware), Proximal Data and SANDisk (which bought FlashSoft) among others.

Read more about IBM Flash Cache Software (FCSA) including various questions and perspectives in part two of this two-part post located here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: IBM Server Side Storage I/O SSD Flash Cache Software

Part II IBM Server Flash Cache Storage I/O accelerator for SSD

This is the second in a two-part post series on IBM’s Flash Cache Storage Accelerator (FCSA) for Solid State Device (SSD) storage announced today. You can view part I of the IBM FCSA announcement synopsis here.

Some FCSA SSD cache questions and perspectives

What is FCSA?
FCSA is a server-side storage I/O or IOP caching software tool that makes use of local (server-side) nand flash SSD (PCIe cards or drives). As a cache tool (view the IBM flash site here) FCSA provides persistent read caching on IBM servers (xSeries, Flex and Blade x86 based systems) with write-through cache (e.g. data cached for later reads), while write data is written directly to block attached storage including SANs. Back-end storage can be iSCSI, SAS, FC or FCoE based block systems from IBM or others, including all SSD, hybrid SSD or traditional HDD based solutions.

How is this different from just using a dedicated PCIe nand flash SSD card?
FCSA complements those by using them as persistent storage for caching storage I/O reads to boost performance. By using the PCIe nand flash card or SSD drives, FCSA and other storage I/O cache optimization tools free up valuable server-side DRAM from having to be used as a read cache on the servers. On the other hand, caching tools such as FCSA also keep local cached reads closer to the applications on the servers (e.g. locality of reference), reducing the impact on back-end shared block storage systems.

What is FCSA for?
With storage I/O or IOPS and application performance in general, location matters due to locality of reference, hence the need for different approaches for various environments. IBM FCSA is a storage I/O caching software technology that reduces the impact of applications having to do random read operations. In addition to caching reads, FCSA also has a write-through cache, which means that while data is written to back-end block storage, including iSCSI, SAS, FC or FCoE based storage (IBM or other vendors), a copy of the data is cached for later reads. Thus while the best storage I/O is the one that does not have to be done (e.g. can be resolved from cache), the second best would be writes that go to a storage system without competing with read requests (handled via cache).
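
To make that behavior concrete, here is a minimal Python sketch (my own illustration, not IBM's implementation) of a read cache with write-through: reads are served from server-side flash when possible, while every write goes straight to the back-end storage with a copy kept for later reads.

```python
# Minimal sketch of a read cache with write-through behavior, similar in
# spirit to (but not the actual implementation of) a tool like FCSA.
class Backend:
    """Stand-in for back-end block storage (e.g. an iSCSI, SAS or FC LUN)."""
    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba)

    def write(self, lba, data):
        self.blocks[lba] = data


class WriteThroughCache:
    def __init__(self, backend, capacity=1024):
        self.backend = backend
        self.cache = {}          # server-side flash cache: LBA -> data
        self.capacity = capacity

    def read(self, lba):
        if lba in self.cache:            # hit: best I/O is one not sent to the SAN
            return self.cache[lba]
        data = self.backend.read(lba)    # miss: fetch from back-end storage
        self._insert(lba, data)
        return data

    def write(self, lba, data):
        self.backend.write(lba, data)    # write-through: back-end stays current
        self._insert(lba, data)          # keep a copy for later reads

    def _insert(self, lba, data):
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))   # naive FIFO-style eviction
        self.cache[lba] = data
```

A useful property of write-through shown above is that the back-end is always current, so losing the server or its flash device loses only warm cache contents, not data.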

Who else is doing this?
This is similar to what EMC initially announced and released in February 2012 with VFCache, since renamed XtremSW Cache, along with other caching and I/O optimization software from others (e.g. SANDisk, Proximal Data and PernixData among others).

Does this replace IBM EasyTier?
The simple answer is no; one is for tiering (e.g. EasyTier), the other is for I/O caching and optimization (e.g. FCSA).

Does this replace or compete with other IBM SSD technologies?
As with anything, it is possible to find a way to make or view it as competitive. However in general FCSA complements other IBM storage I/O optimization and management software tools such as EasyTier, as well as leverages and coexists with their various SSD products (from PCIe cards to drives to drive shelves to all SSD and hybrid SSD solutions).

How does FCSA work?
The FCSA software works either in a physical machine (PM) bare metal mode with Microsoft Windows operating systems (OS) such as Server 2008 and 2012 among others, with *nix support for Red Hat Linux, or in a VMware virtual machine (VM) environment. In a VMware environment, High Availability (HA), DRS and vMotion services and capabilities are supported. Hopefully it will be sooner vs. later that we hear IBM make a follow-up announcement (pure speculation and wishful thinking) on support for more hypervisors (e.g. Hyper-V, Xen, KVM) along with CentOS, Ubuntu or Power based systems including IBM pSeries. Read more about IBM Pure and Flex systems here.

What about server CPU and DRAM overhead?
As should be expected, a minimal amount of server DRAM (e.g. main memory) and CPU processing cycles are used to support the FCSA software and its drivers. Note the reason I say "as should be expected" is that you cannot have software running on a server doing any type of work without it using some amount of DRAM and processing cycles. Granted some vendors will try to spin and say that there is no server-side DRAM or CPU consumed, which would only be true if they are completely external to the server (VM or PM). The important thing is to understand how much CPU and DRAM are consumed along with the corresponding effectiveness benefit that is derived.

Does FCSA work with NAS (NFS or CIFS) back-end storage?
No, this is a server-side block-only cache solution. However, having said that, if your applications or server are presenting shared storage to others (e.g. out the front-end) as NAS (NFS, CIFS, HDFS) using block storage (back-end), then FCSA can cache the storage I/O going to those back-end block devices.

Is this an appliance?
The short and simple answer is no, however I would not be surprised to hear some creative software defined marketer try to spin it as a flash cache software appliance. What this means is that FCSA is simply I/O and storage optimization software for caching to boost read performance for VM and PM servers.

What does this hardware or storage agnostic stuff mean?
Simple, it means that FCSA can work with various nand flash PCIe cards or flash SSD drives installed in servers, as well as with various back-end block storage including SAN from IBM or others. This includes being able to use block storage using iSCSI, SAS, FC or FCoE attached storage.

What is the difference between Easytier and FCSA?
Simple, FCSA provides read acceleration via caching, which in turn should offload some reads from storage systems so that they can focus on handling writes or read-ahead operations. EasyTier, on the other hand, is, as its name implies, for tiering or movement of data in a more deterministic fashion.

How do you get FCSA?
It is software that you buy from IBM that runs on an IBM x86 based server. It is licensed on a per-server basis including one-year service and support. IBM has also indicated that they have volume or multiple-server based licensing options.

Does this mean IBM is competing with other software based IO optimization and cache tool vendors?
IBM is focusing on selling and adding value to their server solutions. Thus while you can buy the software from IBM for their servers (e.g. no bundling required), you cannot buy the software to run on your AMD/SeaMicro, Cisco (including EMC/VCE and NetApp), Dell, Fujitsu, HDS, HP, Lenovo, Oracle or SuperMicro among other vendors' servers.

Will this work on non-IBM servers?
IBM is only supporting FCSA on IBM x86 based servers; however, you can buy the software without having to buy a solution bundle (e.g. servers or storage).

What is this Cooperative Caching stuff?
Cooperative caching takes the next step from a simple read cache with write-through to also support cache coherency in a shared environment, as well as leverage tighter application or guest operating system and storage system integration. For example, applications can work with storage systems to make intelligent, predictive, informed decisions on what to pre-fetch or read ahead and cache, as well as enable cache warming on restart. Another example is where, in a shared storage environment, if one server makes a change to a shared LUN or volume, the local server-side caches on other servers are also updated to prevent stale or inconsistent reads from occurring.
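
As a rough illustration of that shared LUN scenario (a hypothetical sketch, not IBM's actual protocol or wire format), each node that writes a block notifies its peers so they drop any now-stale cached copy:

```python
# Hypothetical sketch of cooperative cache invalidation between servers
# sharing a LUN; real protocols and wire formats will differ.
class CoopCacheNode:
    def __init__(self, name, backend):
        self.name = name
        self.backend = backend   # shared back-end storage: (lun, lba) -> data
        self.cache = {}          # local server-side cache: (lun, lba) -> data
        self.peers = []          # other nodes sharing the same LUNs

    def write(self, lun, lba, data):
        self.backend[(lun, lba)] = data    # write-through to shared storage
        self.cache[(lun, lba)] = data
        for peer in self.peers:            # tell peers their copies are stale
            peer.invalidate(lun, lba)

    def invalidate(self, lun, lba):
        self.cache.pop((lun, lba), None)   # drop stale entry; next read misses

    def read(self, lun, lba):
        if (lun, lba) not in self.cache:   # miss or invalidated: re-fetch
            self.cache[(lun, lba)] = self.backend.get((lun, lba))
        return self.cache[(lun, lba)]


shared = {}
a, b = CoopCacheNode("a", shared), CoopCacheNode("b", shared)
a.peers, b.peers = [b], [a]
b.read("lun0", 7)            # b caches the block
a.write("lun0", 7, "new")    # a's write invalidates b's stale copy
assert b.read("lun0", 7) == "new"
```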

Can FCSA use multiple nand flash SSD devices on the same server?
Yes, IBM FCSA supports use of multiple server-side PCIe and or drive based SSD devices.

How is cache coherency maintained including during a reboot?
While data stored in the nand flash SSD device is persistent, it's up to the server and applications working with the storage systems to decide if there is coherent or stale data that needs to be refreshed. Likewise, since FCSA is server-side and back-end storage system or SAN agnostic, without cooperative caching it will not know if the underlying data for a storage volume changed without being notified by another server that modified it. Thus if using shared back-end including SAN storage, do your due diligence to make sure multi-host access to the same LUNs or volumes is being coordinated with some server-side software to support cache coherency, something that would apply to all vendors.

What about cache warming or reloading of the read cache?
Some vendors have tightly integrated caching software and storage systems, something IBM refers to as cooperative caching, that can have the ability to re-warm the cache. With solutions that support cache re-warming, the cache software and storage systems work together to maintain cache coherency while pre-loading data from the underlying storage system based on hot bands or other profiles and experience. As of this announcement, FCSA does not support cache warming on its own.

Does IBM have service or tools to complement FCSA?
Yes, IBM has assessment, profiling and planning tools that are available on a free consultation services basis with a technician to check your environment. Of course, the next logical step would be for IBM to make the tools available via free download or on some other basis as well.

Do I recommend and have I tried FCSA?
On paper, or via WebEx, YouTube or other venues, FCSA looks interesting and capable, a good fit for some environments, particularly if IBM server-based. However, since my PM and VMware VM based servers are from other vendors, and FCSA only runs on IBM servers, I have not actually given it a hands-on test drive yet. Thus if you are looking at storage I/O optimization and caching software tools for your VM or PM environment, check out IBM FCSA to see if it meets your needs.

General comments

It is great to see server and storage systems vendors add value to their solutions with I/O and performance optimization as well as caching software tools. However, I am also concerned with the growing number of different software tools that only work with one vendor's servers or storage systems, or at least are supported as such.

This reminds me of a time not all that long ago (ok, for some longer than others) when we had a proliferation of different host bus adapter (HBA) drivers and pathing drivers from various vendors. The result is a hodge podge (a technical term) of software running on different operating systems, hypervisors, PMs, VMs, and storage systems, all of which need to be managed. On the other hand, for the time being perhaps the benefit will outweigh the pain of having different tools. That said, there are options ranging from server-side vendor centric, to storage system focused, to third-party software tool providers.

Another consideration is that some tools work in VMware environments, others support multiple hypervisors, while others also support bare metal servers or PMs. Which applies to your environment will of course depend. After all, if you have an all-VMware environment, given that many of the caching tools tend to be VMware focused, you have more options vs. those who still have predominately PM environments.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?

Today computer and data storage memory vendor Viking announced that SSD vendor SolidFire has deployed their SATADIMM modules in the DDR3 DIMM (e.g. Random Access Memory (RAM) main memory) slots of their SF SSD based storage solution.

SolidFire SSD solution with SATADIMM via Viking

Nand flash SATA SSD in a DDR3 DIMM slot?

Per Viking, SolidFire uses the SATADIMM as boot devices and cache to complement the normal SSD drives used in their SF SSD storage grid or cluster. For those not familiar, SolidFire SF storage systems or appliances are based on industry standard servers that are populated with SSD devices, which in turn are interconnected with other nodes (servers) to create a grid or cluster of SSD performance and space capacity. Thus as nodes are added, performance, availability and capacity also increase, all of which are accessed via iSCSI. Learn more about SolidFire SF solutions on their website here.

Here is the press release that Viking put out today:

Viking Technology SATADIMM Increases SSD Capacity in SolidFire’s Storage System (Press Release)

Viking Technology’s SATADIMM enables higher total SSD capacity for SolidFire systems, offering cloud infrastructure providers an optimized and more powerful solution

FOOTHILL RANCH, Calif., August 12, 2013 – Viking Technology, an industry leading supplier of Solid State Drives (SSDs), Non-Volatile Dual In-line Memory Module (NVDIMMs), and DRAM, today announced that SolidFire has selected its SATADIMM SSD as both the cache SSD and boot volume SSD for their storage nodes. Viking Technology’s SATADIMM SSD enables SolidFire to offer enhanced products by increasing both the number and the total capacity of SSDs in their solution.

“The Viking SATADIMM gives us an additional SSD within the chassis allowing us to dedicate more drives towards storage capacity, while storing boot and metadata information securely inside the system,” says Adam Carter, Director of Product Management at SolidFire. “Viking’s SATADIMM technology is unique in the market and an important part of our hardware design.”

SATADIMM is an enterprise-class SSD in a Dual In-line Memory Module (DIMM) form factor that resides within any empty DDR3 DIMM socket. The drive enables SSD caching and boot capabilities without using a hard disk drive bay. The integration of Viking Technology’s SATADIMM not only boosts overall system performance but allows SolidFire to minimize potential human errors associated with data center management, such as accidentally removing a boot or cache drive when replacing an adjacent failed drive.

“We are excited to support SolidFire with an optimal solid state solution that delivers increased value to their customers compared to traditional SSDs,” says Adrian Proctor, VP of Marketing, Viking Technology. “SATADIMM is a solid state drive that takes advantage of existing empty DDR3 sockets and provides a valuable increase in both performance and capacity.”

SATADIMM is a 6Gb SATA SSD with capacities up to 512GB. A next generation SAS solution with capacities of 1TB & 2TB will be available early in 2014. For more information, visit our website www.vikingtechnology.com or email us at sales@vikingtechnology.com.

Sales information is available at: www.vikingtechnology.com, via email at sales@vikingtechnology.com or by calling (949) 643-7255.

About Viking Technology Viking Technology is recognized as a leader in NVDIMM technology. Supporting a broad range of memory solutions that bridge DRAM and SSD, Viking delivers solutions to OEMs in the enterprise, high-performance computing, industrial and the telecommunications markets. Viking Technology is a division of Sanmina Corporation (Nasdaq: SANM), a leading Electronics Manufacturing Services (EMS) provider. More information is available at www.vikingtechnology.com.

About SolidFire SolidFire is the market leader in high-performance data storage systems designed for large-scale public and private cloud infrastructure. Leveraging an all-flash scale-out architecture with patented volume-level quality of service (QoS) control, providers can now guarantee storage performance to thousands of applications within a shared infrastructure. In-line data reduction techniques along with system-wide automation are fueling new block-storage services and advancing the way the world uses the cloud.

What’s inside the press release

On the surface this might cause some to jump to the conclusion that the nand flash SSD is being accessed via the fast memory bus normally used for DRAM (e.g. main memory) of a server or storage system controller. Some might even jump to the conclusion that Viking has figured out a way to move nand flash reads and writes via a DDR3 DIMM memory location while doing so with the Serial ATA (SATA) protocol, enabling server boot and use by any operating system or hypervisor (e.g. VMware vSphere or ESXi, Microsoft Hyper-V, Xen or KVM among others).

Note for those not familiar or needing a refresh on DRAM, DIMM and related items, here is an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press).

7.2.2 Memory

Computers rely on some form of memory ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random accessible memory (RAM), non-volatile RAM (NVRAM) or Flash along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Read more of the excerpt here…

Is SATADIMM memory bus nand flash SSD storage?

In short no.

Some vendors or their surrogates might be tempted to spin such a story by masking some details to allow your imagination to run wild a bit. When I saw the press release announcement I reached out to Tinh Ngo (Director Marketing Communications) over at Viking with some questions. I was expecting the usual marketing spin story, dancing around the questions with long answers or simply not responding with anything of substance (or that requires some substance to believe). Instead what I found was the opposite, and thus want to share with you some of the questions and answers.

So what actually is SATADIMM? See for yourself in the following image (click on it to view or Viking site).

Via Viking website, click on image or here to learn more about SATADIMM

Does SATADIMM actually move data via DDR3 and the memory bus? No, SATADIMM only draws power from it (yes, nand flash does need power when in use, contrary to a myth I was once told).

Wait, then how is data moved and how does it get to and through the SATA IO stack (hardware and software)?

Simple, there is a cable that attaches to the SATADIMM and in turn attaches to an internal SATA port. Or, using a different connector cable, attach the SATADIMM (up to four) to a standard internal SAS port such as on a main board, HBA, RAID or caching adapter.

Does that mean that Viking and whoever uses SATADIMM is not actually moving data or implementing SATA via the memory bus and DDR3 DIMM sockets? That would be correct; data movement occurs via cable connection to standard SATA or SAS ports.

Wait, why would I give up a DDR3 DIMM socket in my server that could be used for more DRAM? Great question, and the answer should be: it depends on whether you need more DRAM or more nand flash. If you are out of drive slots or PCIe card slots and have enough DRAM for your needs along with available DDR3 slots, you can stuff more nand flash into those locations assuming you have SAS or SATA connectivity.

SATADIMM with SATA connector top right via Viking

SATADIMM SATA connector via Viking

SATADIMM SAS (Internal) connector via Viking

Why not just use the onboard USB ports and plug in some high-capacity USB thumb drives to cut cost? If that is your primary objective it would probably work, and I can also think of some other ways to cut cost. However those are also probably not the primary tenets that people looking to deploy something like SATADIMM would be looking for.

What are the storage capacities that can be placed on the SATADIMM? They are available in different sizes up to 400GB for SLC and 480GB for MLC. Viking indicated that there are larger capacities and faster 12Gb SAS interfaces in the works, which would be more of a surprise if there were not. Learn more about current product specifications here.

Good questions. Attached are three images that sort of illustrate the connector. As well, why not a USB drive; well, there are customers that put 12 of these in a system (with up to 480GB usable capacity each), which equates to roughly an added 5.7TB inside the box without touching the drive bays (left for mass HDDs). You will then need to RAID or connect all the SATADIMMs via an HBA.

How fast is the SATADIMM, and does putting it into a DDR3 slot speed things up or slow them down? Viking has some basic performance information on their site (here). However, performance generally should be the same as or similar to that of a SAS or SATA SSD drive, although keep SSD metrics and performance in the proper context. Also keep in mind that the DDR3 DIMM slot is only being used for power and not real data movement.

Is the SATADIMM using 3Gb or 6Gb SATA? Good question; today it is 6Gb SATA (remember that SATA can attach to a SAS port, however not vice versa). Let's see if Viking responds in the comments with more, including RAID support (hardware or software) along with other insight such as UNMAP, TRIM, and Advanced Format (AF) 4KByte blocks among other things.

Have I actually tried SATADIMM yet? No, not yet. However, I would like to give it a test drive and workout if one were to show up on my doorstep, and would share the results (along with disclosure) if applicable.

Future of nand flash in DRAM DIMM sockets

Keep in mind that someday nand flash will actually be seen in DRAM DIMM sockets not only in a WebEx or PowerPoint demo preso (e.g. similar to what Diablo Technologies is previewing), but also in real use, for example what Micron earlier this year predicted for flash on DDR4 (more on DDR3 vs. DDR4 here).

Is SATADIMM the best nand flash SSD approach for every solution or environment? No, however it does give some interesting options for those who are PCIe card, or HDD and SSD drive slot constrained that also have available DDR3 DIMM sockets. As to price, check with Viking, wish I could say tell them Greg from StorageIO sent you for a good value, however not sure what they would say or do.

Related more reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seagate provides proof of life: Enterprise HDD enhancements

Proof of life: Enterprise Hard Disk Drives (HDD’s) are enhanced

Last week, while hard disk drive (HDD) competitor Western Digital (WD) was announcing yet another acquisition (VeloBit) in a string of acquisitions (e.g. earlier ones included STEC and Arkeia) and investments (Skyera), Seagate announced new enterprise class HDDs for their portfolio. Note that it was only two years ago that WD acquired Hitachi Global Storage Technologies (HGST), the disk drive manufacturing business of Hitachi Ltd. (not to be confused with HDS).

Similar to WD expanding their presence in the growing nand flash SSD market, Seagate also in May of this year extended their existing enterprise class SSD portfolio. These enhancements included new drives with 12Gbs SAS interface, along with a partnership (and investment) with PCIe flash card startup vendor Virident. Other PCIe flash SSD card vendors (manufacturers and OEMs) include Cisco, Dell, EMC, FusionIO, HP, IBM, LSI, Micron, NetApp and Oracle among others.

These new Seagate enterprise class HDD’s are designed for use in cloud and traditional data center servers and storage systems. A month or two ago Seagate also announced new ultra-thin (5mm) client (aka desktop) class HDD’s along with a 3.5 inch 4TB video optimized HDD. The video optimized HDD’s are intended for Digital Video Recorders (DVR’s), Set Top Boxes (STB’s) or other similar applications.

What was announced?

Specifically what Seagate announced were two enterprise class drives, one for performance (e.g. 1.2TB 10K) and the other for space capacity (e.g. 4TB).

 

| | Enterprise High Performance 10K.7 (formerly known as Savvio) | Enterprise Terascale (formerly known as Constellation) |
|---|---|---|
| Class/category | Enterprise / High Performance | Enterprise / High Capacity |
| Form factor | 2.5" Small Form Factor (SFF) | 3.5" |
| Interface | 6Gbs SAS | 6Gbs SATA |
| Space capacity | 1,200GB (1.2TB) | 4TB |
| RPM speed | 10,000 | 5,900 |
| Average seek | 2.9 ms | 12 ms |
| DRAM cache | 64MB | 64MB |
| Power idle / operating | 4.8 watts | 5.49 / 6.49 watts |
| Intelligent Power Management (IPM) | Yes – Seagate PowerChoice | Yes – Seagate PowerChoice |
| Warranty | Limited 5 years | Limited 3 years |
| Instant Secure Erase (ISE) | Yes | Optional |
| Other features | RAID Rebuild assist, Self-Encrypting Device (SED) | Advanced Format (AF) 4K blocks in addition to standard 512 byte sectors |
| Use cases | Replace earlier generation 3.5" 15K SAS and Fibre Channel HDDs for higher performance applications, including file systems and databases where SSDs are not a practical fit. | Backup and data protection, replication, copy operations for erasure coding and data dispersal, active and dormant archives, unstructured NAS, big data, data warehouse, cloud and object storage. |

Note that the Seagate Terascale has a disk rotation speed of 5,900 RPM (5.9K), which is not a typo given the more traditional 5.4K RPM drives. This slight increase in rotational speed from 5.4K to 5.9K, combined with other enhancements (e.g. firmware, electronics), should help boost performance for higher capacity workloads.
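
For a sense of what rotational speed alone contributes, average rotational latency is half a revolution, or 30,000 / RPM milliseconds. The quick calculation below shows the bump from 5.4K to 5.9K shaving roughly half a millisecond per I/O:

```python
# Average rotational latency is half a revolution: 30000 / RPM milliseconds.
for rpm in (5400, 5900, 10000, 15000):
    print(f"{rpm:>6} RPM -> {30000 / rpm:.2f} ms average rotational latency")
# 5400 -> 5.56 ms, 5900 -> 5.08 ms, 10000 -> 3.00 ms, 15000 -> 2.00 ms
```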

Let us watch for some performance numbers to be published by Seagate or others. Note that I have not had a chance to try these new drives yet, however look forward to getting my hands on them (among others) sometime in the future for a test drive to add to the growing list found here (hey Seagate and WD, that’s a hint ;) ).

What this all means?

Wait, weren’t HDD’s supposed to be dead or dying?

Some people just like new and emerging things and thus will declare anything existing, or that they have lost interest in (or that their jobs need them to), as old, boring or dead.

For example if you listen to some, they may say nand flash SSD are also dead or dying. For what it is worth, imho nand flash-based SSDs still have a bright future in front of them even with new technologies emerging as they will take time to mature (read more here or listen here).

However, the reality is that for at least the next decade, like them or not, HDD’s will continue to play a role that is also evolving. Thus, these and other improvements with HDD’s will be needed until current nand flash or emerging PCM (Phase Change Memory) among other forms of SSD are capable of picking up all the storage workloads in a cost-effective way.

Btw, yes, I am also a fan and user of nand flash-based SSD’s, in addition to HDD’s and see roles for both as being viable complementing each other for traditional, virtual and cloud environments.

In short, HDD’s will keep spinning (pun intended) for some time granted their roles and usage will also evolve similar to that of tape summit resources.

This announcement by Seagate, along with other enhancements from WD, shows that the HDD will not only see its 60th birthday (and here), it will probably also easily see its 70th, and not from the comfort of a computer museum. The reason is that there is yet another wave of HDD improvements just around the corner, including Shingled Magnetic Recording (SMR) (more info here) along with Heat Assisted Magnetic Recording (HAMR) among others. Watch for more on HAMR and SMR in future posts. With these and other enhancements, we should be able to see a return to the rapid density improvements with HDDs observed during the mid to late 2000s era when perpendicular recording became available.

What is up with this ISE stuff, is that the same as what Xiotech (e.g. X-IO) had?

Is this the same technology that Xiotech (now X-IO) referred to as ISE? The answer is no; the Xiotech ISE (Intelligent Storage Element) is something different, while this Seagate ISE is for fast, secure erasure of data on disk. The benefit of Instant Secure Erase (ISE) is to cut the time required to erase a drive for secure disposal from hours or days down to seconds (or less). For those environments that already factor drive erase time into their overall costs, this can increase the useful time in service to help improve TCO and ROI.

Wait a minute, aren’t slower RPM’s supposed to be lower performance?

Some of you might be wondering or asking the question of wait, how can a 10,000 revolution per minute (10K RPM) HDD be considered fast vs. a 15K HDD, let alone SSD?

There is a trend occurring with HDDs where the old rules of IOPS or performance being tied directly to the size and rotational speed (RPMs) of drives, along with their interfaces, no longer strictly apply. This comes down to being careful not to judge a book, or in this case a drive, by its cover. While RPMs do have an impact on performance, new generation 10K drives such as some 2.5" models are delivering performance equal to or better than earlier generation 3.5" 15K devices.

Likewise, there are similar improvements with 5.4K devices vs. previous generation 7.2K models. As you will see in some of the results found here, not all the old rules of thumbs when it comes to drive performance are still valid. Likewise, keep those metrics that matter in the proper context.


Click on above image to see various performance results

For example, as seen in the results (above), more DRAM or DDR cache on the drives has a positive impact on sequential reads, which can be good news if that is what your applications need. Thus, do your homework and avoid judging a device simply by its RPM, interface or form factor.

Other considerations, temperature and vibration

Another consideration is that with the increased density of more drives being placed in a given amount of space, some of which may not have the best climate controls, humidity and vibration are concerns. Thus drives having vibration dampening or other safeguards to maintain performance is important. Likewise, even though drive heads and platters are sealed, humidity also needs to be taken care of in data centers or cloud service provider facilities in hot environments near the equator.

If this is not connecting with you, think about how close parts of Southeast Asia and the Indian subcontinent are to the equator, along with the rapid growth and low-cost focus occurring there. Your data center might be temperature and humidity controlled, however others who are very focused on cost cutting may not be as concerned with normal facilities best practices.

What type of drives should be used for cloud, virtual and traditional storage?

Good question, and one where the answer should be: it depends upon what you are trying or need to do (e.g. see previous posts here or here and here (via Seagate)). For example, here are some tips for big data storage and for making storage decisions in general.

Disclosure

Seagate recently invited me along with several other industry analysts to their cloud storage analyst summit in San Francisco, where they covered roundtrip coach airfare, lodging, airport transfers and a nice dinner at the Epic Roasthouse.

I also have received in the past a couple of Momentus XT HHDD (aka SSHD) from Seagate. These are in addition to those that I bought including various Seagate, WD along with HGST, Fujitsu, Toshiba and Samsung (SSD and HDD’s) that I use for various things.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: EMC Evolves Enterprise Data Protection with Enhancements

This is the second part of a two-part series on recent EMC backup and data protection announcements. Read part I here.

What about the products, what’s new?

In addition to articulating their strategy for modernizing data protection (covered in part I here), EMC announced enhancements to Avamar, Data Domain, Mozy and NetWorker.

Data protection storage systems (e.g. Data Domain)

Building off of previously announced Backup Recovery Solutions (BRS) including Data Domain operating system storage software enhancements, EMC is adding more application and software integration along with new platform (systems) support.

Data Domain (e.g. Protection Storage) enhancements include:

  • Application integration with Oracle, SAP HANA for big data backup and archiving
  • New Data Domain protection storage system models
  • Data in place upgrades of storage controllers
  • Extended Retention now available on added models
  • SAP HANA Studio backup integration via NFS
  • Boost for Oracle RMAN, native SAP tools and replication integration
  • Support for backing up and protecting Oracle Exadata
  • SAP (non-HANA) support both on SAP and Oracle

Data in place upgrades of controllers for 4200 series models on up (previously available on some larger models). This means that controllers can be upgraded with data remaining in place as opposed to a lengthy data migration.

The Extended Retention facility is a zero-cost license that enables more disk drive shelves to be attached to supported Data Domain systems. Thus there is not a license fee, however you do pay for the storage shelves and drives to increase the available storage capacity. Note that this feature increases storage capacity by adding more disk drives and does not increase the performance of the Data Domain system. Extended Retention has been available in the past, however it is now supported on more platform models. The extra storage capacity is essentially placed into a different tier that an archive policy can then migrate data into.
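
Conceptually, such an archive policy is a simple age-based sweep. Here is a minimal Python sketch (my illustration with hypothetical names and thresholds, not EMC's implementation) of migrating aging backups from the active tier into an extended retention tier:

```python
import time

# Hypothetical age-based archive policy sweep; names and thresholds are
# illustrative only, not EMC Data Domain's implementation.
ARCHIVE_AFTER_DAYS = 90

def sweep(active_tier, retention_tier, now=None):
    """Move items older than the policy threshold to extended retention."""
    now = now or time.time()
    cutoff = now - ARCHIVE_AFTER_DAYS * 86400
    for name, item in list(active_tier.items()):
        if item["written"] < cutoff:
            retention_tier[name] = active_tier.pop(name)

active = {"backup-2013-01-01": {"written": time.time() - 200 * 86400}}
retention = {}
sweep(active, retention)
print(sorted(retention))   # ['backup-2013-01-01'] moved off the active tier
```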

Boost for accelerating data movement to and from Data Domain systems is only available using Fibre Channel. When asked about FC over Ethernet (FCoE) or iSCSI EMC indicated its customers are not asking for this ability yet. This has me wondering if it is that the current customer focus is around FC, or if those customers are not yet ready for iSCSI or FCoE, or, if there were iSCSI or FCoE support, more customers would ask for it?

With the new Data Domain protection storage systems EMC is claiming up to:

  • 4x faster performance than earlier models
  • 10x more scalable and 3x more backup/archive streams
  • 38 percent lower cost per GB based on holding price points and applying improvements


EMC Data Domain data protection storage platform family


Data Domain supporting both backup and archive

Expanding Data Domain from backup to archive

EMC continues to evolve the Data Domain platform from just being a backup target platform with dedupe and replication to a multi-function, multi-role solution. In other words, one platform with many uses. This is an example of using one tool or technology for different purposes, such as backup and archiving, however with separate policies. Here is a link to a video where I discuss using common tools for backup and archiving with separate policies. In the above figure EMC Data Domain is shown as being used for backup along with storage tiering and archiving (file, email, Sharepoint, content management and databases among other workloads).


EMC Data Domain supporting different functions and workloads

Also shown are various tools from other vendors, such as Commvault Simpana, that can be used as both backup and archiving tools with Data Domain as a target. Likewise Dell products acquired via the Quest acquisition are shown, along with those from IBM (e.g. Tivoli) and FileTek among others. Note that if you are a competitor of EMC, or simply a fan of other technology, you might come to the conclusion that the above may not be different from others. Then again, others who are not articulating their version or vision of something like the above figure probably should be stating the obvious vs. arguing that they did it first.

Data source integration (aka data protection software tools)

It seems like just yesterday that EMC acquired Avamar (2006) and NetWorker aka Legato (2003), not to mention Mozy (2007) or Dantz (Retrospect, since divested) in 2004. With the exception of Dantz (Retrospect) which is now back in the hands of its original developers, EMC continues to enhance and evolve Avamar, Mozy and NetWorker including with this announcement.

General Avamar 7 and NetWorker 8.1 enhancements include:

  • Deeper integration with primary storage and protection storage tiers
  • Optimization for VMware vSphere virtual server environments
  • Improved visibility and control for data protection of enterprise applications

Additional Avamar 7 enhancements include:

  • More Data Domain integration and leveraging as a repository (since Avamar 6)
  • NAS file systems with NDMP accelerator access (EMC Isilon & Celerra, NetApp)
  • Data Domain Boost enhancements for faster backup / recovery
  • Application integration with IBM (DB2 and Notes), Microsoft (Exchange, Hyper-V images, Sharepoint, SQL Server), Oracle, SAP, Sybase, VMware images

Note that the Avamar Data Store is still used mainly for ROBO and desktop or laptop type backup scenarios that do not yet support Data Domain (also see the Mozy enhancements below).

Avamar supports VMware vSphere virtual server environments using granular change block tracking (CBT) technology as well as image level backup and recovery with vSphere plugins. This includes an Instant Access recovery when images are stored on Data Domain storage.

Instant Access enables a VM that has been protected using Avamar image level technology on Data Domain to be booted via an NFS VMware datastore. VMware sees the VM and is able to power it on and boot directly from the Data Domain via the NFS datastore. Once the VM is active, it can be Storage vMotioned to a production VMware datastore while active (e.g. running) for recovery on the fly capabilities.
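
As a rough sketch of that flow using the open source pyVmomi library (hypothetical object names and a simplified sequence, not the actual Avamar integration), the recovered VM is powered on from the Data Domain backed NFS datastore and then relocated to production storage while running:

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Simplified sketch of the Instant Access flow with pyVmomi; the actual
# Avamar/Data Domain integration automates this. Names are hypothetical.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()

def find(vimtype, name):
    """Locate a managed object (VM, datastore, etc.) by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find(vim.VirtualMachine, "recovered-vm")       # registered from the NFS datastore
prod_ds = find(vim.Datastore, "production-datastore")

vm.PowerOnVM_Task()                                  # boot directly from Data Domain
spec = vim.vm.RelocateSpec(datastore=prod_ds)        # then Storage vMotion to production
vm.RelocateVM_Task(spec=spec)                        # while the VM keeps running
```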


Instant Access to a VM on Data Domain storage

EMC NetWorker 8.1 enhancements include:

  • Enhanced visibility and control for owners of data
  • Collaborative protection for Oracle environments
  • Synchronized backup and data protection between DBAs and backup admins
  • Oracle DBAs use native tools (e.g. RMAN)
  • Backup admin implements the organization's SLAs (e.g. using NetWorker)
  • Deeper integration with EMC primary storage (e.g. VMAX, VNX, etc)
  • Isilon integration support
  • Snapshot management (VMAX, VNX, RecoverPoint)
  • Automation and wizards for integration, discovery, simplified management
  • Policy-based management, fast recovery from snapshots
  • Integrating snapshots into and as part of data protection strategy. Note that this is more than basic snapshot management as there is also the ability to roll over a snapshot into a Data Domain protection storage tier.
  • Deeper integration with Data Domain protection storage tier
  • Data Domain Boost over Fibre Channel for faster backups and restores
  • Data Domain Virtual Synthetics to cut impact of full backups
  • Integration with Avamar for managing image level backup recovery (Avamar services embedded as part of NetWorker)
  • vSphere Web Client enabling self-service recovery of VMware images
  • Newly created VMs inherit backup polices automatically

Mozy is being positioned for enterprise remote office branch office (ROBO) or distributed private cloud use where Avamar, NetWorker or Data Domain solutions are not as applicable. EMC has mentioned that they have over 800 enterprises using Mozy for desktop, laptop, ROBO and mobile data protection. Note that this is a different target market than the consumer focused Mozy product, which also addresses smaller SMBs and SOHOs (Small Office Home Offices).

EMC Mozy enhancements to be more enterprise grade:

  • Simplified management services and integration
  • Active Directory (AD) for Microsoft environments
  • New storage pools (multiple types of pools) vs. dedicated storage per client
  • Keyless activation for faster provisioning of backup clients

Note that EMC enhanced earlier this year Data Protection Advisor (DPA) with version 6.0.

What does this all mean?

Data protection and backup discussions often focus around tape summit resources or cloud arguments, although this is changing. What is changing is growing awareness and discussion around how data protection storage mediums, systems and services are used along with the associated software management tools.

Some will say backup is broken, often pointing a finger at a media or medium (e.g. tape and disk) as what is wrong. Granted in some environments the target medium (or media) destination is an easy culprit to point a finger at as the problem (e.g. the usual tape sucks or is dead mantra). However, for many environments, while there can be issues, more often than not it is not the media, medium, device or target storage system that is broken, rather how it is being used or abused.

This means revisiting how tools are used along with media or storage systems allocated, used and retained with respect to different threat risk scenarios. After all, not everything is the same in the data center or information factory.

Thus modernizing data protection is more than swapping media or mediums including types of storage system from one to another. It is also more than swapping out one backup or data protection tool for another. Modernizing data protection means rethinking what different applications and data need to be protected against various threat risks.

What this has to do with today’s announcement is that EMC is among others in the industry moving towards a holistic data protection modernizing thought model.

In my opinion what you are seeing out of EMC and some others is taking that step back and expanding the data protection conversation to revisit, rethink why, how, where, when and by whom applications and information get protected.

This announcement also ties into finding and removing costs vs. simply cutting cost at the cost of something elsewhere (e.g. service levels, performance, availability). In other words, finding and removing complexities or overhead associated with data protection while making it more effective.

Some closing points, thoughts and more links:

There is no such thing as a data or information recession
People and data are living longer while getting larger
Not everything is the same in the data center or information factory
Rethink data protection including when, why, how, where, with what and by whom
There is little data, big data, very big data and big fast data
Data protection modernization is more than playing buzzword bingo
Avoid using new technology in old ways
Data footprint reduction (DFR) can help counter changing data life-cycle patterns
EMC continues to leverage Avamar while keeping Networker relevant
Data Domain evolving for both backup and archiving as an example of tool for multiple uses

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC Evolves Enterprise Data Protection with Enhancements (Part I)

A couple of months ago at EMCworld there were announcements around ViPR and Pivotal, along with trust and clouds among other topics. During the recent EMCworld event there were some questions among attendees about backup and data protection announcements (or the lack thereof).

Modernizing Data Protection

Today EMC announced enhancements to its Backup Recovery Solutions (BRS) portfolio (@EMCBackup) that continue to enable information and application data protection modernization, including Avamar, Data Domain, Mozy and NetWorker.

Keep in mind you can’t go forward if you can’t go back, which means if you do not have good data protection to go to, you can’t go forward with your information.

EMC Modern Data Protection Announcements

As part of their Backup to the Future event, EMC announced the following:

  • New generation of data protection products and technologies
  • Data Domain systems: enhanced application integration for backup and archive
  • Data protection suite tools Avamar 7 and Networker 8.1
  • Enhanced Cloud backup capabilities for the Mozy service
  • Paradigm shift as part of data protection modernizing including revisiting why, when, where, how, with what and by whom data protection is accomplished.

What did EMC announce for data protection modernization?

While much of the EMC data protection announcement is around product, there is also the aspect of rethinking data protection. This means looking at data protection modernization beyond swapping out media (e.g. tape for disk, disk for cloud) or one backup software tool for another. Instead, revisiting why data protection needs to be accomplished, by whom, how to remove complexity and cost, enable agility and flexibility. This also means enabling data protection to be used or consumed as a service in traditional, virtual and private or hybrid cloud environments.

EMC uses as an example (what they refer to as the Accidental Architecture) how there are different groups and areas of focus, along with silos, associated with data protection. These groups span virtual, applications, database, server, storage among others.

The results are silos that need to be transformed in part using new technology in new ways, as well as addressing a barrier to IT convergence (people and processes). The theme behind EMC data protection strategy is to enable the needs and requirements of various groups (servers, applications, database, compliance, storage, BC and DR) while removing complexity.

Moving from Silos of data protection to a converged service enabled model

Three data protection and backup focus areas

This sets the stage for the three components for enabling a converged data protection model that can be consumed or used as a service in traditional, virtual and private cloud environments.


EMC three components of modernized data protection (EMC Future Backup)

The three main components (and their associated solutions) of EMC BRS strategy are:

  • Data management services: Policy and storage management, SLA, SLO, monitoring, discovery and analysis. This is where tools such as EMC Data Protection Advisor (aka via WysDM acquisition) fit among others for coordination or orchestration, setting and managing polices along with other activities.
  • Data source integration: Applications, Database, File systems, Operating System, Hypervisors and primary storage systems. This is where data movement tools such as Avamar and Networker among others fit along with interfaces to application tools such as Oracle RMAN.
  • Protection storage: Targets, destination storage systems with media or mediums optimized for protecting and preserving data, along with enabling data footprint reduction (DFR). DFR includes functionality such as compression and dedupe among others (a minimal dedupe sketch follows this list). An example of data protection storage is EMC Data Domain.
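
As referenced above, here is a minimal Python sketch of content-addressed deduplication, one of the DFR techniques mentioned; the chunk size and structure are illustrative only, not EMC's implementation:

```python
import hashlib

# Toy content-addressed dedupe: identical chunks are stored once, and a
# recipe of fingerprints reassembles the original data.
def dedupe(data: bytes, chunk_size: int = 4096):
    store = {}      # fingerprint -> chunk (stored once)
    recipe = []     # ordered fingerprints to rebuild the original stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:         # only unique chunks consume space
            store[fp] = chunk
        recipe.append(fp)
    return store, recipe

def restore(store, recipe):
    return b"".join(store[fp] for fp in recipe)

data = b"ABCD" * 4096               # highly redundant data dedupes well
store, recipe = dedupe(data)
assert restore(store, recipe) == data
print(f"logical {len(data)} bytes stored as {sum(map(len, store.values()))}")
```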

Read more about product items announced and what this all means here in the second of this two-part series.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

HDS Mid Summer Storage and Converged Compute Enhancements

Converged Compute, SSD Storage and Clouds

Hitachi Data Systems (HDS) announced today several enhancements to their data storage and unified compute portfolio as part of their Maximize I.T. initiative.

Setting the context

As part of setting the stage for this announcement, HDS has presented the following strategy as part of their vision for IT transformation and cloud computing.

https://hds.com/solutions/it-strategies/maximize-it.html?WT.ac=us_hp_flash_r11

What was announced

This announcement builds on earlier ones around HDS Unified Storage (HUS) primary storage using nand flash MLC Solid State Devices (SSDs) and Hard Disk Drives (HDDs), along with unified block and file (NAS), as well as the Unified Compute Platform (UCP), also known as converged compute, networking, storage and software. These enhancements follow recent updates to the HDS Content Platform (HCP) for object, file and content storage.

There are three main focus areas of the announcement:

  • Flash SSD storage enhancements for HUS
  • Unified with enhanced file (aka BlueArc based)
  • Enhanced unified compute (UCP)

HDS Flash SSD acceleration

The question should not be if SSD is in your future, rather when, where, with what and how much will be needed.

As part of this announcement, HDS is releasing an all flash SSD based HUS enterprise storage system. Similar to what other vendors have done, HDS is attaching flash SSD storage to their HUS systems in place of HDDs. Hitachi has developed their own SSD module, announced in 2012 (read more here). The HDS SSD modules use Multi Level Cell (MLC) nand flash chips (dies) and now support 1.6TB of storage space capacity per module. This is different from other vendors who either use nand flash SSD drive form factor devices (e.g. Intel, Micron, Samsung, SANdisk, Seagate, STEC (now WD), WD among others), or PCIe form factor cards (e.g. FusionIO, Intel, LSI, Micron, Virident among others), or attach a third-party external SSD device (e.g. IBM/TMS, Violin, Whiptail etc.).

Like some other vendors, HDS has also done more than simply attach an SSD (drive, PCIe card, or external device) to their storage systems and call it an integrated solution. What this means is that HDS has implemented software or firmware changes in their storage systems to manage durability and extend flash duty cycles in the face of program/erase (P/E) cycle wear. In addition HDS has implemented performance optimization in their storage systems to leverage the faster SSD modules; after all, faster storage media or devices need fast storage systems or controllers.
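
As a toy illustration of the durability idea (my sketch, not Hitachi's firmware), a flash translation layer can steer each new write to the block with the fewest program/erase cycles so wear evens out across the device:

```python
import heapq

# Toy wear-aware block allocation (not HDS firmware): each write goes to
# the block with the fewest program/erase (P/E) cycles so far.
class WearLeveler:
    def __init__(self, num_blocks):
        # min-heap of (erase_count, block_id)
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def allocate(self):
        erases, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erases + 1, block))  # count the erase on reuse
        return block

wl = WearLeveler(num_blocks=8)
writes = [wl.allocate() for _ in range(32)]
print(writes)   # writes spread evenly across blocks, evening out P/E wear
```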

While the new all flash storage system can be initially bought with just SSD, similar to other hybrid storage solutions, hard disk drives (HDD’s) can also be installed. For enabling full performance at low latency, HDS is addressing both the flash SSD modules as well as the storage systems they attach to including back-end, front-end and caching in-between.

The release enables 500,000 (half a million) IOPS (no IOP size, read or write mix, or random vs. sequential context was indicated). A future firmware update (non-disruptive) is claimed by HDS to enable higher performance of 1,000,000 IOPS at under a millisecond.

In addition to future performance improvements, HDS is also indicating increased storage space capacity of its MLC flash SSD modules (1.6TB today). Using trays of 12 modules (1.6TB each), up to 154TB of flash SSD can be placed in a single rack (e.g. 96 modules at 1.6TB is roughly 154TB).

HDS File and Network Attached Storage (NAS)

HUS unified NAS file system and gateway (BlueArc based) enhancements include:

  • New platforms leveraging faster processors (both Intel and Field Programmable Gate Arrays (FPGA’s))
  • Common management and software tools from 3000 to new 4000 series
  • Bandwidth doubled with faster connections and more memory
  • Four 10GbE NAS serving ports (front-end)
  • Four 8Gb Fibre Channel ports (back-end)
  • FPGA leveraged for off-loading some dedupe functions (faster performance)

HDS Unified Compute Platform (UCP)

As part of this announcement, HDS is enhancing the Unified Compute Platform (UCP) offerings. HDS re-entered the compute market in 2012, joining other vendors offering unified compute, storage and networking solutions. The HDS converged data infrastructure competes with the AMD (SeaMicro) SM15000, Dell vStart and VRTX (for the lower end market), EMC and VCE vBlock, and NetApp FlexPod, along with those from HP (including Moonshot micro servers), IBM PureSystems, Oracle and others.

UCP Pro for VMware vSphere

  • Turnkey converged solution (Compute, Networking, Storage, Software)
  • Includes VMware vSphere pre-installed (OEM from VMware)
  • Flexible compute blade options
  • Three storage system options (HUS, HUS VM and VSP)
  • Cisco and Brocade IP networking
  • UCP Director 3.0 with enhanced automation and orchestration software

UCP Select for Microsoft Private Cloud

  • Supports Hyper-V 3.0 server virtualization
  • Live migration with DR and resynch
  • Microsoft Fast Track certified

UCP Select for Oracle RAC

  • HDS Flash SSD storage
  • SMP x86 compute for performance
  • 2x improvements for IOPS less than 1 millisecond
  • Common management with HiCommand suite
  • Integrated with Oracle RMAN and OVM

UCP Select for SAP HANA

  • Scale out to 8TB of memory (DRAM)
  • Tier 1 storage system certified for SAP HANA DR
  • Leverages SAP HANA SAP storage connector API

What this all means?

With these announcements HDS is extending its storage centric hardware, software and services solution portfolio for block, file and object access across different usage tiers (systems, applications, mediums). HDS is also expanding their converged unified compute platforms to stay competitive with others including Dell, EMC, Fujitsu, HP, IBM, NEC, NetApp and Oracle among others. For environments with HDS storage looking for converged solutions to support VMware, Microsoft Hyper-V, Oracle or SAP HANA these UCP systems are worth checking out as part of evaluating vendor offerings. Likewise for those who have HDS storage exploring SSD offerings, these announcements give opportunities to enable consolidation as do the unified file (NAS) offerings.

Note that for now HDS does not have a public, formalized story around PCIe flash cards; however, they have relationships with various vendors as part of their UCP offerings.

Overall a good set of incremental enhancements for HDS to stay competitive and leverage their field proven capabilities including management software tools.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

June 2013 Server and StorageIO Update Newsletter

June 2013 Newsletter

Welcome to the June 2013 edition of the StorageIO Update. In this edition, coverage includes data center infrastructure management (DCIM), metrics that matter, industry trends, and IBM buying Softlayer for cloud, IaaS and managed services. Other items include backup and data protection topics for SMBs, as well as big data storage topics. Also, the EPA has announced a review session for Energy Star for Data Center storage where you can give your comments. Enjoy this edition of the StorageIO Update newsletter.

Click on the following links to view the June 2013 edition as an HTML (sent via Email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EPA Energy Star Data Center Storage Draft Specification review

Storage I/O trends

For those of you interested in EPA Energy Star for Data Center Storage, here is an announcement for an upcoming conference call and review of the version 1.0 final draft specification.

There are a few attachments referenced in the following note from EPA that can be accessed here:

EPA_Version 1.0 Storage Final Draft Specification Cover Letter
EPA_Version 1.0 Storage Draft 4 Specification Comment Response Document
EPA_Version 1.0 Storage Final Draft Test Method
EPA_Version 1.0 Storage Final Draft Specification

Dear ENERGY STAR® Data Center Storage Partner or Other Interested Party:

Please see the attached important correspondence from the U.S. Environmental Protection Agency concerning the ENERGY STAR Version 1.0 Data Center Storage Final Draft Specification and Test Method. EPA will host a webinar on July 9, 2013 from 3:00 to 5:00 PM Eastern Time to discuss the documents with stakeholders. Please RSVP for the webinar to storage@energystar.gov no later than July 5, 2013.

Thank you for your continued support of the ENERGY STAR program.

For more information, visit: www.energystar.gov

This message was sent to you on behalf of ENERGY STAR. Each ENERGY STAR partner organization must have at least one primary contact receiving e-mail to maintain partnership. If you are no longer working on ENERGY STAR, and wish to be removed as a contact, please update your contact status in your MESA account. If you are not a partner organization and wish to opt out of receiving e-mails, you may call the ENERGY STAR Hotline at 1-888-782-7937 and request to have your mass mail settings changed. Unsubscribing means that you will no longer receive program-wide or product-specific e-mails from ENERGY STAR.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IBM buys Softlayer, for software defined infrastructures and clouds?

Storage I/O trends

IBM today announced that they are acquiring privately held, Dallas, Texas-based Softlayer, an Infrastructure as a Service (IaaS) provider.

IBM is referring to this as Cloud without Compromise (read more about clouds, conversations and confidence here).

It’s about the management, flexibility, scale up, out and down, agility and valueware.

Is this IBM’s new software defined data center (SDDC) or software defined infrastructure (SDI) or software defined management (SDM), software defined cloud (SDC) or software defined storage (SDS) play?

This is more than a software defined marketing or software defined buzzword announcement.

If your view of software defined ties into the theme of leveraging and unleashing resources, enablement, flexibility and agility of hardware, software or services, then you may see Softlayer as part of a software defined infrastructure.

On the other hand, if your views or opinions of what is or is not software defined align with a specific vendor, product, protocol, model or punditry, then you may not agree, particularly if it is in opposition to anything IBM.

Cloud building blocks

During today’s announcement briefing call with analysts there was a noticeable absence of software defined buzz talk, which given its hype and usage lately was a refreshing and welcome relief. So with that, let’s set the software defined conversation aside (for now).


Who is Softlayer, why is IBM interested in them?

Softlayer provides software and services to support SMB, SME and other environments with bare metal (think traditional hosted servers), along with multi-tenant (shared) virtual public and private cloud service offerings.

Softlayer supports various applications and environments, from little data processing to big data analytics, from social to mobile to legacy. This includes apps or environments that were born in the cloud, along with legacy environments looking to leverage cloud in a complementary way.

Some more information about Softlayer includes:

  • Privately held IaaS firm founded in 2005
  • Estimated revenue run rate of around $400 million with 21,000 customers
  • Mix of SMB, SME and Web-based or born in the cloud customers
  • Over 100,000 devices under management
  • Provides a common, modularized management framework and set of tools
  • Mix of customers from Web startups to global enterprise
  • Presence in 13 data centers across the US, Asia and Europe
  • Automation and interoperability with a large number of APIs accessible and supported (see the sketch after this list)
  • Flexibility, control and agility for physical (bare metal) and cloud or virtual
  • Public, private and data center to data center
  • Designed for scale, durability and resiliency without complexity
  • Part of OpenStack ecosystem both leveraging and supporting it
  • Ability for customers to use OpenStack, Cloudstack, Citrix, VMware, Microsoft and others
  • Can be white or private labeled for use as a service by VARs
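
As a concrete illustration of that API access, here is a minimal sketch in Python using the requests library against SoftLayer's published REST endpoint pattern; the username and API key are placeholders, and the exact endpoint path should be treated as an assumption to verify against SoftLayer's API documentation:

    # Minimal sketch of querying the SoftLayer REST API.
    # The username and API key below are placeholders, not real credentials.
    import requests

    SL_USER = "example-user"
    SL_API_KEY = "example-api-key"

    # SoftLayer REST endpoints follow the
    # https://api.softlayer.com/rest/v3/<Service>/<method>.json pattern;
    # getObject on SoftLayer_Account returns basic account details.
    resp = requests.get(
        "https://api.softlayer.com/rest/v3/SoftLayer_Account/getObject.json",
        auth=(SL_USER, SL_API_KEY),
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())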

Storage I/O trends

What IBM is planning for Softlayer

Softlayer will report into IBM Global Technology Services (GTS), complementing existing capabilities which include ten cloud computing centers on five continents. IBM has created a new Cloud Services Division and expects cloud revenues could be $7 billion annually by the end of 2015. Amazon Web Services (AWS) is estimated to hit about $3.8 billion by the end of 2013. Note that in 2012 the AWS target available market was estimated to be about $11 billion, which should become larger moving forward. Rackspace, by comparison, announced earnings on May 8, 2013 of $362 million for the quarter, with most of that being hosting vs. cloud services. That works out to an annualized estimated run rate of $1.448 billion (4 x $362 million), or better depending on growth.

I mention AWS and Rackspace to illustrate the growth potential for IBM and Softlayer to address the needs of both cloud services customers, such as those who use AWS (among other providers), as well as bare metal, hosting or dedicated server customers, such as those with Rackspace among others.

Storage I/O trends

What is not clear at this time is if IBM is combining traditional hosting, managed services and new offerings, products and services in that $7 billion number. In other words, if the $7 billion represents the revenues of the new Cloud Services Division independent of other GTS or legacy offerings, as well as excluding hardware and software products from STG (Systems Technology Group) among others, that would be impressive and a challenge to the likes of AWS.

IBM has indicated that it will leverage its existing Systems Technology Group (STG) portfolio of servers and storage extending the capabilities of Softlayer. While currently x86 based, one could expect IBM to leverage and add support for their Power systems line of processors and servers, Puresystems, as well as storage such as XIV or V7000 among others for tier 1 needs.

Some more notes:

  • Ties into IBM Smart Cloud initiatives, model and paradigm
  • This deal is expected to close 3Q 2013; terms and price were not disclosed
  • Will enable Softlayer to be leveraged on a larger, broader basis by IBM
  • Gives IBM increased access to SMB, SME and web customers compared to the past
  • Software and development to stay part of Softlayer
  • Provides IBM an extra jumpstart for supporting and leveraging OpenStack
  • Compatible with and supports Cloudstack and Citrix, who are also IBM partners
  • Also compatible with and supports VMware, who is also an IBM partner

Storage I/O trends

Some other thoughts and perspectives

This is a good and big move for IBM to add value and leverage their current portfolios of both services as well as products and technologies. However it is more than just adding value or finding new routes to market for those goods and services; it’s also about enablement. IBM has long been in the services business, including managed services, out-sourcing, in-sourcing and hosting. This can be seen as another incremental evolution of those offerings, both to existing IBM enterprise customers as well as to reach new and emerging customers, along with SMBs or SMEs that tend to grow up and become larger consumers of information and data infrastructure services.

Further, this helps to add some product and meaning around the IBM Smart Cloud initiatives and programs (not that there was not before), giving customers, partners and resellers something tangible to see, feel, look at, touch and gain experience with, not to mention confidence in clouds.

On the other hand, is IBM signaling that they want more of the growing business that AWS has been realizing, not to mention Microsoft Azure, Rackspace, Centurylink/Savvis, Verizon/Terremark, CSC, HP Cloud, Cloudsigma, Bluehost among many others (if I missed you or your favorite provider, feel free to add it to the comments section)? This also gets IBM added DevOps exposure, something that Softlayer practices, as well as an OpenStack play, not to mention cloud, software defined, virtual, big data, little data, analytics and many other buzzword bingo terms.

Congratulations to both IBM and the Softlayer folks, now let’s see some execution and watch how this unfolds.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Web chat Thur May 30th: Hot Storage Trends for 2013 (and beyond)

Storage I/O trends

Join me on Thursday May 30, 2013 at Noon ET (9AM PT) for a live web chat at the 21st Century IT (21cit) site (click here to register, sign-up, or view earlier posts). This will be an online, interactive web chat format conversation, so if you are not able to attend, you can visit at your convenience to view it and leave your questions along with comments. I have done several of these web chats with 21cit as well as at other venues; they are a lot of fun and engaging (time flies by fast).

For those not familiar, 21cIT is part of the DeusM/UBM family of sites including Internet Evolution, SMB Authority, and Enterprise Efficiency among others that I do article posts, videos and live chats for.


Sponsored by NetApp

I like these types of sites in that while they have a sponsor, the content is generally kept separate between that of editors and contributors like myself and the vendor-supplied material. In other words, I coordinate with the site editors on what topics I feel like writing (or doing videos) about that align with the given site’s focus and themes, as opposed to following an advertorial calendar script.

During this industry trends perspective web chat, one of the topics and themes planned for discussion is software defined storage (SDS). View a recent video blog post I did here about SDS. In addition to SDS, solid state devices (SSD) including nand flash, cloud, virtualization, object, backup and data protection, performance, and management tools among others are topics that will be put out on the virtual discussion table.

Storage I/O trends

Following are some examples of recent and earlier industry trends perspectives posts that I have done over at 21cit:

Video: And Now, Software-Defined Storage!
There are many different views on what is or is not “software-defined” with products, protocols, preferences and even press releases. Check out the video and comments here.

Big Data and the Boston Marathon Investigation
How the human face of big-data will help investigators piece together all the evidence in the Boston bombing tragedy and bring those responsible to justice. Check out the post and comments here.

Don’t Use New Technologies in Old Ways
You can add new technologies to your data center infrastructure, but you won’t get the full benefit unless you update your approach with people, processes, and policies. Check out the post and comments here.

Don’t Let Clouds Scare You, Be Prepared
The idea of moving to cloud computing and cloud services can be scary, but it doesn’t have to be so if you prepare as you would for implementing any other IT tool. Check out the post and comments here.

Storage and IO trends for 2013 (& Beyond)
Efficiency, new media, data protection, and management are some of the keywords for the storage sector in 2013. Check out these and other trends, predictions along with comments here.

SSD and Real Estate: Location, Location, Location
You might be surprised how many similarities there are between buying real estate and buying SSDs.
Location matters, and it’s not if, rather when, where, why and how you will be using SSD including nand flash in the future; read more and view comments here.

Everything Is Not Equal in the Data center, Part 3
Here are steps you can take to give the right type of backup and protection to data and solutions, depending on the risks and scenarios they face. The result? Savings and efficiencies. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 2
Your data center’s operations can be affected at various levels, by multiple factors, in a number of degrees. And, therefore, each scenario requires different responses. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 1
It pays to check your data center: different components need different levels of security, storage, and availability. Read more and view comments here.

Data Protection Modernizing: More Than Buzzword Bingo
IT professionals and solution providers should put technologies such as disk based backup, dedupe, cloud, and data protection management tools as assets and resources to make sure they receive necessary funding and buy in. Read more and view comments here.

Don’t Take Your Server & Storage IO Pathing Software for Granted
Path managers are valuable resources. They will become even more useful as companies continue to carry out cloud and virtualization solutions. Read more and view comments here.

SSD Is in Your Future: Where, When & With What Are the Questions
During EMC World 2012, EMC (as have other vendors) made many announcements around flash solid-state devices (SSDs), underscoring the importance of SSDs to organizations future storage needs. Read more here about why SSD is in your future along with view comments.

Changing Life cycles and Data Footprint Reduction (DFR), Part 2
In the second part of this series, the ABCDs (Archive, Backup modernize, Compression, Dedupe and data management, storage tiering) of data footprint reduction, as well as SLOs, RTOs, and RPOs are discussed. Read more and view comments here.

Changing Life cycles and Data Footprint Reduction (DFR), Part 1
Web 2.0 and related data needs to stay online and readily accessible, creating storage challenges for many organizations that want to cut their data footprint. Read more and view comments here.

No Such Thing as an Information Recession
Data, even older information, must be protected and made accessible cost-effectively. Not to mention that people and data are living longer as well as getting larger. Read more and view comments here.

Storage I/O trends

These real-time, industry trends perspective interactive chats at 21cit are open forum format (however be polite and civil) and are not vendor sales or marketing pitches. If you have specific questions you’d like to ask or points of view to express, click here and post them in the chat room at any time (before, during or after).

Mark your calendar for this event live Thursday, May 30, at noon ET or visit after the fact.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

May 2013 Server and StorageIO Update Newsletter

May 2013 Newsletter

Welcome to the May 2013 edition of the StorageIO Update. This edition has announcement analysis of EMC ViPR and Software Defined Storage (including a video here), plus server, storage and I/O metrics that matter, for example how many IOPS an HDD can do (it depends). SSD including nand flash remains a popular topic, both in terms of industry adoption and customer deployment. Also included are my perspectives on SSD vendor FusionIO’s CEO leaving in a flash. Speaking of nand flash, have you thought about how some RAID implementations and configurations can extend the life and durability of SSDs? More on this soon; meanwhile, check out this video for some perspectives.

Click on the following links to view the May 2013 edition as an HTML (sent via Email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

FusionIO (FIO) SSD vendor CEO out in a flash, what’s up with that?

Storage I/O trends

FusionIO (FIO), who recently bought Nexgen to expand their reach from a server-centric focus to a broader flash focus, has seen their CEO and founder David Flynn race out the door. Not surprisingly, Wall Street, which does not like to be surprised, was surprised just a week or two after the most recent earnings announcement and reacted with a sell-off of FIO stock.

Here is the conundrum: those who were or are fans of Flynn, FIO and their in-your-face, server-centric approach may not be happy with this move.

On the other hand, those who were not fans of Flynn, FIO and their approach of getting in your face (or having others do so) if you did not fall into their ranks may be happy with this move.

One question is whether Flynn was shown the door and left before it could hit his backside on the way out, or did he see something coming and pull the rip cord on his golden parachute, or some combination of the two?

The recent Nexgen acquisition could be seen as a move by FIO (and their board of directors) either to make the company more attractive for an acquisition, or to transition from a server-side centric approach to a broader focus.

If the former, perhaps Flynn sees or saw the writing on the wall as to who those suitors might be and decided to take his money and run, joining the serial entrepreneur ranks now.

On the other hand, perhaps Flynn was just too focused, with a singular passion for the server space, and thus not able or interested in transitioning to a broader focus, which might also have involved eating a bit of crow given the in-your-face, it’s-the-FIO-way-or-the-highway approach to server-only flash.

For Nexgen to be successful, FIO would need to align more with the larger vendors and other startups who offer broader portfolios, the very targets FIO threw mud or FUD at, something that some CEOs can have challenges with. It should also be noted that FIO has brought in new employees with experience in broader markets, not to mention industry veterans like John Spiers of Nexgen.

Candidly, I am not sure which of the above is the actual scenario; however, for those involved with FIO as employees, partners, customers and shareholders, I hope some clarity arrives soon. That clarity could come via an acquisition (by whom is one of many questions), or a relaunch as FIO 2.0 or something similar with a focus on bringing more capabilities to customers, increasing their touch points, and selling more products, hardware and software, as opposed to leaving those for others (e.g. their competitors).

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC ViPR software defined object storage part III

Storage I/O trends

This is part III in a series of posts pertaining to EMC ViPR software defined storage and object storage. You can read part I here and part II here.


More on the object opportunity

Other object access includes OpenStack storage (Swift) along with AWS S3 HTTP and REST API access. This also includes ViPR supporting EMC Atmos, VNX and Isilon arrays as southbound persistent storage.

Object (and cloud) storage access example

EMC is claiming that over 250 VNX systems can be abstracted to support scaling with stability (performance, availability, capacity, economics) using ViPR. Third-party storage will be supported along with software such as OpenStack Swift, Ceph and others running on commodity hardware. Note that EMC has some history with object storage and access, including Centera and Atmos. Visit the micro site I have set up called www.objectstoragecenter.com and watch for more content to be updated and added there.
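
To make that object access model concrete, below is a minimal sketch of S3-style object access of the kind ViPR is positioned to serve northbound. It uses Python with the boto3 library; the endpoint, bucket and credentials are placeholders rather than anything EMC has published:

    # Minimal S3-style object access sketch; the endpoint, credentials and
    # bucket below are placeholders, not EMC-published values.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.example.com",  # any S3-compatible endpoint
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    # Write (PUT) an object, then read (GET) it back over the same HTTP/REST API.
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object storage")
    obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
    print(obj["Body"].read())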

More on the ViPR control plane and controller

ViPR differs from some others in that it does not sit in the data path all the time (e.g. between application servers and storage systems or cloud services), which cuts the potential for bottlenecks.

ViPR architecture

Organizations that can use ViPR include enterprise, SMB, CSP or MSP and hosting sites. ViPR can be used in a control mode to leverage underlying storage systems, appliances and services intelligence and functionality. This means ViPR can be used to complement southbound or target storage systems and services, as opposed to treating them as dumb disks or JBOD.

On the other hand, ViPR will also have a suite of data services such as snapshot, replication, data migration, movement and tiering to add value where those do not exist. Customers will be free to choose how they want to use and deploy ViPR: for example, leveraging underlying storage functionality (e.g. a lightweight model), or a more familiar storage virtualization heavy lifting model. In the heavy lifting model more work is done by the virtualization or abstraction software to add value; however, bottlenecks can be a concern depending on how it is deployed.

Service categories

Software defined, storage hypervisor, virtual storage or storage virtualization?

Most storage virtualization, storage hypervisor and virtual storage solutions that are hardware or software based (e.g. software defined) are implemented in what is referred to as an in-band model. With in-band, the storage virtualization software or hardware sits between the applications (northbound) and storage systems or services (southbound).

While this approach can be easier to implement, along with adding value-add services, it can also introduce scaling bottlenecks depending on the implementation. Examples of in-band storage virtualization include Actifio, DataCore, EMC VMAX with third-party storage, HDS with third-party storage, IBM SVC (and their V7000 Storwize storage system based on it) and NetApp Vseries among others. An advantage of in-band approaches is that there should be no host or server-side software requirements, providing SAN transparency.

There is another approach, called out-of-band, that has also been tried. However, pure out-of-band requires a management system along with agents, drivers, shims, plugins or other software resident on host application servers.

Example of a generic fast path control path model

ViPR takes a different approach, one seen a few years ago with EMC Invista, called fast path control path, which for the most part stays out of the data path. While this is similar to out-of-band, there should be no need for any host server-side (e.g. northbound) software. By being fast path control path, the virtualization or abstraction and management functions stay out of the way of data being moved or work being done.

Hmm, kind of like how management should be, there to help when needed, out-of-the-way not causing overhead other times ;).
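
To illustrate the general idea, here is a conceptual sketch in Python (generic, not EMC’s actual implementation): the control path handles provisioning and mapping once, after which reads and writes flow directly between host and storage:

    # Conceptual sketch of a fast path control path model; a generic
    # illustration of the idea, not EMC ViPR's actual implementation.

    class Controller(object):
        """Control path: handles provisioning and mapping, never touches I/O data."""
        def __init__(self):
            self.volume_map = {}  # volume name -> backend storage target

        def provision(self, volume, backend):
            self.volume_map[volume] = backend
            return backend  # the host talks to the backend directly from here on

    class Backend(object):
        """Southbound storage target that services the actual reads and writes."""
        def __init__(self, name):
            self.name, self.blocks = name, {}

        def write(self, lba, data):
            self.blocks[lba] = data

        def read(self, lba):
            return self.blocks.get(lba)

    controller = Controller()
    array = Backend("array-01")

    # Control path: one-time provisioning/mapping through the controller.
    target = controller.provision("vol1", array)

    # Fast (data) path: host I/O goes straight to the backend, bypassing the controller.
    target.write(0, b"hello")
    print(target.read(0))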

Is EMC the first (even with Invista) to leverage fast path control path?

Actually, up until about a year or so ago, or shortly after HP acquired 3PAR, HP had a solution called the Storage Virtualization Services Platform (SVSP) that was OEMed from LSI (e.g. StoreAge). Unfortunately, HP decided to retire that as opposed to extending its capabilities for file and object access (northbound) as well as different southbound targets or destination services.

What’s this northbound and southbound stuff?

Simply put, think in terms of a vertical stack with host servers (PMs or VMs) at the top, with applications (and hypervisors or other tools such as databases) running on them (e.g. north).

Northbound servers, southbound storage systems and cloud services

Think of storage systems, appliances, cloud services or other target destinations on the bottom (or south). ViPR sits in between providing storage services and management to the northbound servers leveraging the southbound storage.

What host servers can ViPR support for serving storage?

ViPR is being designed to be server agnostic (e.g. virtual or physical), along with operating system agnostic. In addition, ViPR is being positioned as capable of serving northbound (e.g. up to application servers) block, file or object, as well as accessing southbound (e.g. targets) block, file and object storage systems, file systems or services.

Note that earlier similar solutions from EMC have been either block based (e.g. Invista, VPLEX, VMAX with third-party storage) or file based. Also note that this means ViPR is not just for VMware or virtual server environments and that it can exist in legacy, virtual or cloud environments.


Likewise ViPR is intended to be application agnostic, supporting little data, big data and very big data (VBD) along with Hadoop or other specialized processing. Note that while ViPR will support HDFS in addition to NFS and CIFS file-based access, Hadoop will not be running on or in the ViPR controllers; that would live or run elsewhere.

How will ViPR be deployed and licensed?

EMC has indicated that the ViPR controller will be delivered as software that installs into a virtual appliance (e.g. VMware) running as a virtual machine (VM) guest. It is not clear when support will exist for other hypervisors (e.g. Microsoft Hyper-V, Citrix/Xen, KVM), or whether VMware vSphere with vCenter is required or simply the free ESXi version. As of the announcement pre-briefing, EMC had not yet finalized pricing and licensing details. General availability is expected in the second half of calendar 2013.

Keep in mind that the ViPR controller (software) runs as a VM that can be hosted on a clustered hypervisor for HA. In addition, multiple ViPR controllers can exist in a cluster to further enhance HA.

Some questions to be addressed among others include:

  • How and where are IOs intercepted?
  • Who can have access to the APIs, what is the process, is there a developers program, SDK along with resources?
  • What network topologies are supported local and remote?
  • What happens when JBOD is used and no advanced data services exist?
  • What are the characteristics of the object access functionality?
  • What if any specific switches or data path devices and tools are needed?
  • How does a host server know to talk with its target and ViPR controller know when to intercept for handling?
  • Will SNIA CDMI be added and when as part of the object access and data services capabilities?
  • Are programmatic bindings available for the object access along with support for other APIs including IOS?
  • What are the performance characteristics including latency under load as well as during a failure or fault scenario?
  • How will EMC position VPLEX and its caching model on a local and wide area basis vs. ViPR, or will we see the two work together, and if so, what will that look like?

Bottom line (for now):

Good move for EMC; now let us see how they execute, including driving adoption of their open APIs, something they have had success with in the past with Centera and other solutions. Likewise, let us see what other storage vendors become supported or add support, along with how pricing and licensing are rolled out. EMC will also have to articulate when and where to use ViPR vs. VPLEX along with other storage systems or management tools.

Additional related material:
Are you using or considering implementation of a storage hypervisor?
Cloud and Virtual Data Storage Networking (CRC)
Cloud conversations: Public, Private, Hybrid what about Community Clouds?
Cloud, virtualization, storage and networking in an election year
Does software cut or move place of vendor lock-in?
Don’t Use New Technologies in Old Ways
EMC VPLEX: Virtual Storage Redefined or Respun?
How many degrees separate you and your information?
Industry adoption vs. industry deployment, is there a difference?
Many faces of storage hypervisor, virtual storage or storage virtualization
People, Not Tech, Prevent IT Convergence
Resilient Storage Networks (Elsevier)
Server and Storage Virtualization Life beyond Consolidation
Should Everything Be Virtualized?
The Green and Virtual Data Center (CRC)
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Unified storage systems showdown: NetApp FAS vs. EMC VNX
backup, restore, BC, DR and archiving
VMware buys virsto, what about storage hypervisor’s?
Who is responsible for vendor lockin?

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved