Part II: EMC announces XtremIO General Availability, speeds and feeds
XtremIO flash SSD more than storage I/O speed
Following up part I of this two-part series, here are more details, insights and perspectives about EMC XtremIO and its general availability that were announced today.
XtremIO the basics
- All flash Solid State Device (SSD) based solution
- Cluster of up to four X-Brick nodes today
- X-Bricks available in 10TB increments today, 20TB in January 2014
- 25 eMLC SSD drives per X-Brick with redundant dual processor controllers
- Provides server-side iSCSI and Fibre Channel block attachment
- Integrated data footprint reduction (DFR) including global dedupe and thin provisioning
- Designed for extending duty cycle, minimizing wear of SSD
- Removes need for dedicated hot spare drives
- Capable of sustained performance and availability with multiple drive failure
- Only unique data blocks are saved, others tracked via in-memory metadata pointers
- Reduces overhead of data protection vs. traditional small RAID 5 or RAID 6 configurations
- Eliminates the performance impact of back-end functions on applications
- Deterministic storage I/O performance (IOPs, latency, bandwidth) over life of system
When would you use XtremIO vs. another storage system?
If you need enterprise-like data services including thin provisioning, dedupe and resiliency with deterministic performance on an all-flash system with raw capacity from 10-40TB (today), then XtremIO could be a good fit. On the other hand, if you need a mix of SSD-based storage I/O performance (IOPS, latency or bandwidth) along with some HDD-based space capacity, then a hybrid or traditional storage system could be the solution. Then there are hybrid scenarios where a hybrid storage system, array or appliance (mix of SSD and HDD) handles most of the applications and data, with an XtremIO handling the more demanding tasks.
How does XtremIO compare to others?
EMC with XtremIO is taking a different approach than some of their competitors, whose model is to compare their faster flash-based solutions vs. traditional mid-market and enterprise arrays, appliances or storage systems on a storage I/O IOP performance basis. With XtremIO there is improved performance measured in IOPs or database transactions among other metrics that matter. However there is also an emphasis on consistent, predictable quality of service (QoS), or what is known as deterministic storage I/O performance. This means both higher IOPs and lower latency while handling normal workloads along with background data services (snapshots, data footprint reduction, etc.).
Some of the competitors focus on how many IOPs or how much work they can do, however without context or showing the impact to applications when background tasks or other data services are in use. Other differences include how cluster nodes are interconnected (for scale-out solutions), such as use of Ethernet and IP-based networks vs. dedicated InfiniBand or PCIe fabrics. Host server attachment will also differ, as some are only iSCSI or Fibre Channel block, or NAS file, or give a mix of different protocols and interfaces.
An industry trend however is to expand beyond the flash SSD need for speed focus by adding context along with QoS, deterministic behavior and addition of data services including snapshots, local and remote replication, multi-tenancy, metering and metrics, security among other items.
Who or what is XtremIO's competition?
To some degree, vendors who only have PCIe flash SSD cards might place themselves as the alternative to all-SSD or hybrid mixed SSD and HDD based solutions. FusionIO used to take that approach until they acquired NexGen (a storage system) and have now taken a broader, more solution-balanced approach of using the applicable tool for the task or application at hand.
Other competitors include the all-SSD based storage array, system or appliance vendors, a list of legacy as well as startup vendors that includes among others IBM who bought TMS (now FlashSystem), NetApp (EF540), Solidfire, Pure, Violin (who did a recent IPO) and Whiptail (bought by Cisco). Then there are the hybrids, a long list including Cloudbyte (software), Dell, EMC's other products, HDS, HP, IBM, NetApp, Nexenta (software), Nimble, Nutanix, Oracle, Simplivity and Tintri among others.
What’s new with this XtremIO announcement
10TB X-Bricks enable 10 to 40TB (physical space capacity) per cluster (available on 11/19/13). 20TB X-Bricks (larger capacity drives) will double the space capacity in January 2014. If you are doing the math, that means either a single brick (dual controller) system, or up to four bricks (nodes, each with dual controllers) configurations. Common across all system configurations are data features such as thin provisioning, inline data footprint reduction (e.g. dedupe) and XtremIO Data Protection (XDP).
What does XtremIO look like?
XtremIO consists of up to four nodes (today) based on what EMC calls X-Bricks.
25 SSD drive X-Brick
Each 4U X-Brick has 25 eMLC SSD drives in a standard EMC 2U DAE (disk enclosure) like those used with the VNX and VMAX for SSD and Hard Disk Drives (HDD). In addition to the 2U drive shelf, there is a pair of 1U storage processors (e.g. controllers) that give redundancy and shared access to the storage shelf.
XtremIO X-Brick block diagram
XtremIO storage processors (controllers) and drive shelf block diagram. Each X-Brick and its storage processors or controllers communicate with each other and other X-Bricks via a dedicated InfiniBand fabric using Remote Direct Memory Access (RDMA) for memory-to-memory data transfers. The controllers or storage processors (two per X-Brick) each have dual processors with eight cores for compute, along with 256GB of DRAM memory. Part of each controller's DRAM memory is set aside as a mirror for its partner or peer and vice versa, with access being over the InfiniBand fabric.
XtremIO X-Brick four node fabric cluster or instance
How XtremIO works
Servers access XtremIO X-Bricks using iSCSI and Fibre Channel for block access. A responding X-Brick node handles the storage I/O request and, in the case of a write, updates the other nodes. The handling node or controller (aka storage processor) checks its metadata map in memory to see if the data is new and unique. If so, the data gets saved to SSD with metadata information updated across all nodes. Note that data gets ingested and chunked or sharded into 4KB blocks. So for example if a 32KB storage I/O request from the server arrives, it is broken (e.g. chunked or sharded) into eight 4KB pieces, each with a mathematically unique fingerprint created. This fingerprint is compared to what is known in the in-memory metadata tables (this is a hexadecimal number compare, so a quick operation). Based on the comparison, if unique the data is saved and pointers created; if it already exists, then pointers are updated.
In addition to determining if data is unique, the fingerprint is also used to generate a balanced data dispersal plan across the nodes and SSD devices. Thus there is the benefit of reducing duplicate data during ingestion, while also reducing back-end I/Os within the XtremIO storage system. Another byproduct is the reduction in time spent on garbage collection or other background tasks commonly associated with SSD and other storage systems.
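To make the write path described above more concrete, here is a minimal sketch of how a single fingerprint can drive both dedupe and balanced placement. This is illustrative only, not EMC XIOS code; the SHA-256 fingerprint, modulo placement and table layout are assumptions made for the example.

```python
import hashlib

CHUNK_SIZE = 4096      # 4KB shards, per the description above
NODE_COUNT = 4         # e.g. a four X-Brick cluster

fingerprint_table = {}  # in-memory metadata: fingerprint -> (node, reference count)

def ingest(data: bytes) -> None:
    """Chunk an incoming write, dedupe by fingerprint, disperse by fingerprint."""
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()       # the mathematical fingerprint
        if fp in fingerprint_table:
            node, refs = fingerprint_table[fp]
            fingerprint_table[fp] = (node, refs + 1)  # duplicate: only a pointer update
        else:
            node = int(fp, 16) % NODE_COUNT           # fingerprint also balances placement
            fingerprint_table[fp] = (node, 1)         # unique: save the chunk on that node
            # write_chunk_to_node(node, fp, chunk)    # back-end I/O happens only here

ingest(b"x" * 32768)  # a 32KB server I/O becomes eight 4KB shards; identical shards dedupe
```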
Metadata is kept in memory with a persistent copy written to a reserved area on the flash SSD drives (think of it as a vault area) to keep system state and consistency. In between data consistency points the metadata is kept in a journal log, similar to how a database handles log writes. What is different from a typical database is that the XtremIO XIOS platform software does these consistency point writes for persistence at a granularity of seconds vs. hours or minutes.
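A rough sketch of that journal-plus-consistency-point pattern follows. The structure, names and one-second interval below are illustrative assumptions based on the description above, not XIOS internals.

```python
import time

journal = []                    # in-memory log of metadata updates
last_checkpoint = time.monotonic()
CHECKPOINT_INTERVAL = 1.0       # seconds-level granularity, per the description (assumed value)

def persist_consistency_point(entries: list) -> None:
    pass  # placeholder: flush metadata state durably to the reserved SSD "vault" area

def log_metadata_update(entry: dict) -> None:
    """Append to the journal; periodically persist a consistency point."""
    global last_checkpoint
    journal.append(entry)
    if time.monotonic() - last_checkpoint >= CHECKPOINT_INTERVAL:
        persist_consistency_point(journal)   # durable state every few seconds
        journal.clear()
        last_checkpoint = time.monotonic()

log_metadata_update({"fingerprint": "abc123", "node": 0})  # hypothetical entry
```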
What about the rumor that XtremIO can only do 4KB IOPs?
Does this mean that the smallest storage I/O or IOP that XtremIO can do is 4KB?
That is a rumor or some FUD I have heard floated by a competitor (or two or three) that assumes because only a 4KB internal chunk or shard is used for processing, there must be no IOPs smaller than 4KB from a server.
XtremIO can do storage I/O IOP sizes of 512 bytes (e.g. the standard block size) as do other systems. Note that the standard server storage I/O block or I/O size is 512 bytes or multiples of that, unless the new 4KB advanced format (AF) block size is being used, which based on my conversations with EMC is not supported, yet. (Updated 11/15/13: EMC has indicated that host (front-end) 4K AF support, along with 512 byte emulation modes, are available now with XIOS.) Also keep in mind that since XtremIO XIOS internally works with 4KB chunks or shards, that is a stepping stone for being able to eventually leverage back-end AF drive support in the future should EMC decide to do so (Updated 11/15/13: Waiting for confirmation from EMC about whether back-end AF support is now enabled or not, will give more clarity as it is received).
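To illustrate why a 4KB internal chunk does not limit host I/O size, here is a tiny sketch (my own example, not EMC's mapping logic) of how a 512-byte host sector lands inside an enclosing 4KB chunk:

```python
CHUNK = 4096
SECTOR = 512

def chunk_for_lba(lba: int) -> tuple:
    """Map a 512-byte host sector to its enclosing 4KB internal chunk."""
    byte_offset = lba * SECTOR
    return byte_offset // CHUNK, byte_offset % CHUNK  # (chunk index, offset within chunk)

# A 512-byte I/O at LBA 9 falls at offset 512 within internal chunk 1, so a
# sub-4KB host I/O is simply work within one chunk, not a limit on I/O size.
print(chunk_for_lba(9))  # (1, 512)
```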
What else is EMC doing with XtremIO?
- VCE Vblock XtremIO systems for SAP HANA and other in-memory databases, along with VDI optimized solutions.
- VPLEX and XtremIO for extended distance local, metro and wide area HA, BC and DR.
- EMC PowerPath XtremIO storage I/O path optimization and resiliency.
- Secure Remote Support (aka phone home) and auto support integration.
Boosting your available software license minutes (ASLM) with SSD
Another use of SSD has been the opportunity to make better use of servers, stretching their usefulness or delaying purchase of new ones by improving their effective use to do more work. In the past this technique of using SSDs to delay a server or CPU upgrade was used when hardware was more expensive, or during the dot com bubble to fill surge demand gaps. This has the added benefit of stretching database and other expensive software licenses to go further or do more work. The less time servers spend waiting for IOPs means more time for doing useful work and getting value from the software license. On the other hand, the more time spent waiting is lost available software minutes, which is cost overhead.
Think of available software license minutes (ASLM) this way: if doing useful work, your software is providing value. On the other hand, if those minutes are not used for useful work (e.g. spent waiting or lost due to CPU, server or I/O wait), then they are lost. This is like the airlines' available seat miles (ASM) metric, where a seat left empty is a lost opportunity, however if used, then value, not to mention if yield management is applied to price that seat differently. To make up for that loss many organizations have to add extra servers and thus more software licensing costs.
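As a back-of-the-envelope illustration of the ASLM idea (all numbers below are made up purely for the example):

```python
# Hypothetical numbers, for illustration of the ASLM concept only.
license_cost_per_year = 47_000   # e.g. an assumed per-socket database license

def cost_of_waiting(io_wait_fraction: float) -> float:
    """License dollars effectively lost to I/O wait instead of useful work."""
    return license_cost_per_year * io_wait_fraction

# At 30% I/O wait, nearly a third of the license spend buys waiting, not work;
# cutting wait to 5% with SSD reclaims those available software license minutes.
print(cost_of_waiting(0.30))  # 14100.0 lost per year
print(cost_of_waiting(0.05))  # 2350.0 lost per year
```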
Can we get a side of context with them metrics?
EMC along with some other vendors are starting to give more context with their storage I/O performance metrics that matter than simple IOPs or hero marketing metrics. However context extends beyond performance to also availability and space capacity, which means data protection overhead. As an example, EMC claims 25% overhead for RAID 5, 20% for RAID 6 or 30% for a RAID 5/RAID 6 combo, where a 25 drive (SSD) XDP has an 8% overhead. However this assumes a 4+1 (5 drive) RAID, not an apples to apples comparison on a space overhead basis. For example a 25 drive RAID 5 (24+1) would have around a 4% parity protection space overhead, or a RAID 6 (23+2) about 8%.
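Working out the parity arithmetic for the configurations mentioned above (simple drive-count math):

```python
def parity_overhead(data_drives: int, parity_drives: int) -> float:
    """Space overhead = parity drives as a fraction of total drives."""
    return parity_drives / (data_drives + parity_drives)

print(f"4+1  RAID 5: {parity_overhead(4, 1):.0%}")   # 20% (perhaps quoted as 25% above
                                                     # if measured as parity vs. usable, 1/4)
print(f"24+1 RAID 5: {parity_overhead(24, 1):.0%}")  # 4%
print(f"23+2 RAID 6: {parity_overhead(23, 2):.0%}")  # 8%
```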
Granted, while the space protection overhead might be more apples to apples when comparing the earlier examples to XDP, there are other differences. For example solutions such as XDP can be more tolerant of multiple drive failures, with faster rebuilds than some of the standard or basic RAID implementations. Thus more context and clarity would be helpful.
StorageIO would like to see vendors, including EMC along with startups, who give data protection space overhead comparisons without context to provide that context (and applauds those who do). This means providing context for data protection space overhead comparisons similar to performance metrics that matter. For example, simply state with an asterisk or footnote when comparing a 4+1 RAID 5 vs. a 25 drive erasure code, forward error correction, dispersal, XDP or wide stripe RAID for that matter (e.g. can we get a side of context). Note this is in no way unique to EMC and in fact quite common with many of the smaller startups as well as established vendors.
General comments
My laundry list of items, which for now are nice-to-haves however for you might be need-to-haves, includes native replication (today this leverages RecoverPoint), Advanced Format (4KB) support for servers (Updated 11/15/13: Per above, EMC has confirmed that host/server-side (front-end) AF along with 512 byte emulation modes exist today) as well as for SSD based drives, DIF (Data Integrity Field), and Microsoft ODX among others. While 12Gb SAS server to X-Brick attachment for small in-the-cabinet connectivity might be nice for some, more practical on a go-forward basis would be 40GbE support.
Now let us see what EMC does with XtremIO and how it competes in the market. One industry and market indicator of the impact or presence of EMC XtremIO to watch is the amount of FUD and mud that will be tossed around. Perhaps time to make a big bowl of popcorn, sit back and enjoy the show…
Ok, nuff said (for now).
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
EMC announces XtremIO General Availability (Part I)
EMC announces XtremIO flash SSD General Availability
EMC announced today the general availability (GA) of the all flash Solid State Device (SSD) XtremIO that they acquired a little over a year ago. Earlier this year EMC announced directed availability (DA) of the EMC version of XtremIO as part of other SSD hardware and software updates (here and here). The XtremIO GA announcement also follows that of the VNX2 or MCx released in September of this year, which also has flash SSD enhancements along with doing more with available resources.
EMC XtremIO flash SSD boosting storage I/O performance
As an industry trend, the question is not if SSD is in your future, rather where, when, how much and what to use, along with coexistence to complement Hard Disk Drive (HDD) based solutions in some environments. This also means that SSD is like real estate where location matters, not to mention having different types of technologies, packaging and solutions to meet various needs (and price points). This all ties back to the notion that the best server and storage I/O or IOP is the one that you do not have to do; the second best is the one with the least impact and best application benefit.
From industry adoption to customer deployment
EMC has evolved the XtremIO platform from a pre-acquisition solution to a first EMC version that was offered to an early set of customers (e.g. DA).
I suspect that the DA was as much a focus on getting early customer feedback and addressing immediate needs or opportunities, as well as getting the EMC sales and marketing teams' messaging and marching orders aligned and deployed. The latter would be rather important to decrease or avoid the temptation to cannibalize existing product sales with the shiny new technology (SNT). Likewise, it would be important for EMC to not create isolated pockets or fenced-off products as some other vendors often do.
25 SSD drive X-Brick
What is being announced?
- General availability vs. directed or limited availability
- Version 2.2 of the XIOS platform software
- Integrating with EMC support and service tools
Let us get back to this announcement and XtremIO, of which EMC has indicated that they have several customers who have now done either $1M or $5M USD deals. EMC has claimed over 1.5 PBytes have been booked and deployed, or with data footprint reduction (DFR) including dedupe, over 10PB effective capacity. Note that for those who are focused on dedupe or DFR reduction ratios, 10:1.5 may not be as impressive as seen with some backup solutions, however keep in mind that this is for primary high performance storage vs. secondary or tertiary storage devices.
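As a quick arithmetic check using only the numbers claimed above:

```python
raw_pb = 1.5         # PBytes booked and deployed, per EMC's claim
effective_pb = 10.0  # effective capacity after DFR/dedupe, per EMC's claim

print(f"{effective_pb / raw_pb:.1f}:1 effective reduction")  # about 6.7:1
```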
As part of this announcement, EMC has also released V2.2 of the XtremIO platform software (XIOS). Normally a new product would start with a version 1.0 at launch, however as explained this is both a new version of the technology as well as the initial GA by EMC.
Also as part of this announcement, EMC is making available XtremIO 10TB X-Bricks with 25 eMLC SSD drives each, along with dual controllers (storage processors). EMC has indicated that it will make available a 20TB X-Brick using larger capacity SSD drives in January 2014. Note that the same type of SSD drives must be used in the systems. Currently there can be up to four X-Bricks per XtremIO cluster or instance that are interconnected using a dedicated InfiniBand fabric. Application servers access the XtremIO X-Bricks using standard Fibre Channel or Ethernet and IP based iSCSI. In addition to the hardware platform items, the XtremIO platform software (XIOS) includes built-in on-the-fly data footprint reduction (DFR) using global dedupe during data ingestion and placement. Other features include thin provisioning, VMware VAAI, data protection and self-balancing data placement.
Who or what applications are XtremIO being positioned for?
Some of XtremIO industry sectors include:
- Financial and insurance services
- Medical, healthcare and life sciences
- Manufacturing, retail and warehouse management
- Government and defense
- Media and entertainment
Application and workload focus:
- VDI including replacing linked clones with ability to do full clone without overhead
- Server virtualization where aggregation causes aggravation with many mixed IOPs
- Database for reducing latency, boosting IOPs as well as improving software license costs.
Databases such as IBM DB2, Oracle RAC, Microsoft SQL Server and MySQL among others have traditionally, for decades, been a prime opportunity for SSD (DRAM and flash). This also includes newer NoSQL or key value stores and metadata repositories for object storage, such as Mongo, HBase, Cassandra and Riak among others. Typical focus includes placing entire instances, or specific files and objects such as indices, journals and redo logs, import/export temp or scratch space, message queues and high activity tables among others.
What about overlap with other EMC products?
If you simply looked at the above list of sectors (among others) or applications, you could easily come to the conclusion that there is or would be overlap. Granted in some environments there will be, which means XtremIO (or other vendors' solutions) may be the primary storage solution. On the other hand, since everything is not the same in most data centers or information factories, there will be a mix of storage systems handling various tasks. This is where EMC will need to be careful applying what they learned during DA about where to place XtremIO and how to position it to complement other solutions when and where needed, or as applicable to be a replacement.
XtremIO Announcement Summary
- All flash SSD storage solution with iSCSI and Fibre Channel server attachment
- Scale out and scale up performance while keeping latency low and deterministic
- Enhanced flash duty cycle (wear leveling) to increase program / erase (P/E) cycles durability
- Can complement other storage systems, arrays or appliances or function as a standalone
- Coexists and complements host side caching hardware and software
- Inline always on data footprint reduction (DFR) including dedupe (global dedupe without performance compromise), space saving snapshots and copies along with thin provisioning
Some General Comment and Perspectives
Overall, XtremIO gives EMC and their customers, partners and prospects a new technology to use and add to their toolbox for addressing various challenges. SSD is in your future; when, where, with what and how are the questions, not to mention how much. After all, a bit of flash SSD in the right location used effectively can have a large impact. On the other hand, a lot of flash SSD in the wrong place or not used effectively will cost you lots of cash. Key for EMC and their partners will be to articulate clearly where XtremIO fits vs. other solutions without adding complexity.
Checkout part II of this series to learn more about XtremIO including what it is, how it works, competition and added perspectives.
Ok, nuff said (for now).
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Some fall 2013 AWS cloud storage and compute enhancements
Some fall 2013 AWS cloud storage and compute enhancements
I just received via Email the October Amazon Web Services (AWS) Newsletter in advance of the re:Invent event next week in Las Vegas (yes I will be attending).
AWS October newsletter and enhancement updates
- EC2 (Elastic Compute Cloud) has been enhanced with reserved micro instances
- EBS (Elastic Block Storage) IOPS to volume capacity ratio requirement has been changed from 10:1 to 30:1, meaning that 4,000 IOPs per 133GB volume are now possible (as opposed to needing a 400GB volume to achieve 4,000 IOPS); see the quick check after this list.
- Redshift service is now available in the Asia Pacific Singapore and Sydney regions
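Here is the quick check of that EBS ratio arithmetic (simple math on the numbers stated above):

```python
OLD_RATIO, NEW_RATIO = 10, 30  # provisioned IOPS allowed per GB of volume
target_iops = 4000

print(target_iops / NEW_RATIO)  # ~133 GB volume now qualifies for 4,000 IOPS
print(target_iops / OLD_RATIO)  # 400 GB was required at the old 10:1 ratio
```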
What this means
AWS is arguably the largest of the public cloud services, with a diverse set of services and options across multiple geographic regions to meet different customer needs. As such it is not surprising to see AWS continue to expand their portfolio, both in terms of features and functionality, along with extending their presence in different geographies.
Let's see what else AWS announces next week in Las Vegas at their 2013 re:Invent event.
Click here to view the current October 2013 AWS newsletter. You can view (and signup for) earlier AWS newsletters here, and while you are at it, view the current and recent StorageIO Update newsletters here.
Ok, nuff said (for now)
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
What does gaining industry traction or adoption mean to you?
What does gaining industry traction or adoption mean to you?
Is it based on popularity or how often something is talked about, blogged, tweeted, commented, videoed or similar?
What are the indicators that something is gaining traction?
Perhaps it is tied to the number of press releases, product or staffing announcements including who has joined the organization along with added coverage of it?
Maybe it's based on how many articles, videos or other content and coverage help to show traction and momentum?
On the other hand is it tied to how many prospects are actually trying a product or service as part of a demo or proof of concept?
Then again, maybe it is associated with how many real paying (revenue) installed footprints and customers there are, or what is also known as industry deployment (customer adoption).
Of those customers actually buying and deploying, how many continue using the technology even after industry adoption subsides, or does the solution become shelfware?
Does the customer deployment actually continue to rise quietly while industry adoption or conversations drop off (past the cycle of hype)?
Gaining context with industry traction
Gaining traction can mean different things to people, however there is also a difference between industry adoption (what’s being talked about among the industry) and industry deployment (what customers are actually buying, installing and continue to use).
Often the two can go hand in hand, usually one before the other, however they can also be separate. For example, it is possible that something new will have broad industry adoption (being talked about) yet have low customer deployment (even over time). This occurs when something new and interesting is fun to talk about, or the vendor or solution provider is cool and fun to hang out with, or simply has cool giveaways.
On the other hand there can be customer deployment and adoption with little to no fanfare (industry adoption) for different reasons.
Here’s my point
Not long ago if you asked or listened to some, you would think that the once high-flying cloud storage vendor Nirvanix was gaining traction based on their marketing along with other activities, yet they recently closed their doors. Then there was Kim Dotcom's hyped Megacloud launch earlier this year that has also now gone dark or shut down. This is not unique to cloud service providers or solutions, as the same can, has and will happen again to traditional hardware, software and services providers (startups and established).
How about the formerly high-flying FusionIO, or the new startup by former FusionIO founder and CEO David Flynn called Primary Data? One of the two is struggling to gain or keep up revenue traction while having declined in industry popularity traction. The other is gaining in industry popularity traction with their recently secured $50 Million in funding, yet is still in stealth mode, so it is rather difficult for them to gain customer adoption or deployment traction (thus for now it's industry adoption focus for them ;).
If you are a customer or somebody actually deploying and using technology, tools, techniques and services for real world activity vs. simply trying new things out, your focus on what is gaining traction will probably be different than others. Granted it is important to keep an eye on what is coming or on futures, however there is also the concern of how it will really work and keep working over time.
For example, Hard Disk Drives (HDD) continue to have industry deployment (customer adoption and usage) traction. However they are not new, and when new models appear (such as the Seagate Ethernet based Kinetic) they may not get the same industry adoption traction as a newer technology might. Case in point: Solid State Devices (SSD) continue to gain customer deployment adoption, with some environments doing more than others, yet have very high industry adoption traction status.
Relative SSD customer adoption and deployment along with future opportunities
On the other hand, if your focus is on what's new and emerging, which is usually more industry centered, then it should be no surprise what traction means and where it is focused. For example the following figure shows where different audiences have various timelines on adoption (read more here).
Current and emerging memory, flash and other SSD technologies for different audiences
Wrap up
When you hear that something is gaining traction, ask yourself (or others) what that means along with the applicable context.
Does that mean something is popular and trending to discuss (based on GQ or looks), or that it is actually gaining real customer adoption based on G2 (insight: they are actually buying vs. simply trying out a free version)?
Does it mean one form of traction along with industry adoption (what’s being talked about) vs. industry deployment (real customer adoption) is better than the other?
No, it simply means putting things into the applicable context.
Ok, nuff said (for now).
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Fall 2013 StorageIO Update Newsletter
Fall 2013 StorageIO Update Newsletter
Ok, nuff said (for now)
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Seagate Kinetic Cloud and Object Storage I/O platform (and Ethernet HDD)
Seagate Kinetic Cloud and Object Storage I/O platform
Seagate announced today their Kinetic platform and drives designed for use by object API accessed storage, including for cloud deployments. The Kinetic platform includes Hard Disk Drives (HDD) that feature 1Gb Ethernet (1 GbE) attachment and speak an object access API, or what Seagate refers to as key/value.
What is being announced with Seagate Kinetic Cloud and Object (Ethernet HDD) Storage?
- Kinetic Open Storage Platform – Ethernet drives, key / value (object access) API, partner software
- Software developer’s kits (SDK) – Developer tools, documentation, drive simulator, code libraries, code samples including for SwiftStack and Riak.
- Partner ecosystem
What is Kinetic?
While it has 1 GbE ports, do not expect to be able to use those for iSCSI or NAS including NFS, CIFS or other standard access methods. Being Ethernet based, the Kinetic drive only supports the key value object access API. What this means is that applications, cloud or object stacks, key value and NoSQL data repositories, or other software that adopt the API can communicate directly using object access.
Internally, the HDD functions as a normal drive would in storing and accessing data; the object access function and translation layer shifts from being in an Object Storage Device (OSD) server node to inside the HDD. The Kinetic drive takes on the key value API personality over its 1 GbE ports instead of traditional Logical Block Addressing (LBA) and Logical Block Number (LBN) access using 3G, 6G or emerging 12G SAS or SATA interfaces. Instead, Kinetic drives respond to object access (aka what Seagate calls key/value) API commands such as Get and Put among others. Learn more about object storage, access and clouds at www.objectstoragecenter.com.
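To show the idea of key/value access replacing LBA access, here is a hypothetical sketch. The class, port number and wire format below are invented for illustration; only the get/put verbs and the Ethernet transport come from the announcement, and the real Kinetic SDK defines its own protocol and client libraries.

```python
import socket

class KineticDrive:
    """Hypothetical key/value client for an Ethernet-attached drive.

    The wire protocol here is invented for illustration only. The point is the
    access model: get/put by key over 1GbE, instead of LBA reads and writes
    over a SAS or SATA interface.
    """
    def __init__(self, host: str, port: int = 8123):  # port is an assumption
        self.addr = (host, port)

    def put(self, key: bytes, value: bytes) -> None:
        with socket.create_connection(self.addr) as s:
            s.sendall(b"PUT " + key + b" " + value)    # illustrative message format

    def get(self, key: bytes) -> bytes:
        with socket.create_connection(self.addr) as s:
            s.sendall(b"GET " + key)
            return s.recv(1 << 20)

# drive = KineticDrive("10.0.0.42")       # hypothetical drive address
# drive.put(b"object-123", b"payload")    # store by key, no LBA involved
```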
Some questions and comments
Is this the same as what was attempted almost a decade ago now with the T10 OSD drives?
Seagate claims no.
What is different this time around, with Seagate doing a drive that to some may vaguely resemble the failed predecessor T10 OSD approach?
Industry support for object access and API development has progressed from an era of build-it-and-they-will-come thinking to one where the drives are adapted to support current cloud, object and key value software deployment.
Won't 1GbE ports be too slow vs. 12G, 6G or even 3G SAS and SATA ports?
Keep in mind those would be apples to oranges comparisons based on the protocols and types of activity being handled. Kinetic types of devices initially will be used for large data intensive applications where the emphasis is on storing or retrieving large amounts of information vs. low latency transactions. Also, keep in mind that one of the design premises is to keep cost low and spread the work over many nodes and devices to meet those goals while relying on server-side caching tools.
Does this mean that the HDD is actually software defined?
Seagate and other HDD manufacturers have not yet jumped on the software defined marketing (SDM) bandwagon. They could join the software defined fun (SDF) and talk about a software defined disk (SDD) or software defined HDD (SDHDD), however let us leave that alone for now.
The reality is that there is far more software in a typical HDD than is realized. Sure, some of that is packaged in ASICs (Application Specific Integrated Circuits) or running as firmware that can be updated. However, there is a lot of software running in an HDD, hence the need for powerful yet energy-efficient processors in those devices. On a drive per drive basis, you may see a Kinetic device consume more energy vs. otherwise equivalent HDDs due to the increase in processing (compute) needed to run the extra software. However that also represents an off-load of some work from servers, enabling them to be smaller or do more work.
Are these drives for everybody?
It depends on whether your application, environment, platform and technology can leverage them or not. If you view the world only through what is new or emerging, then these drives may seem to be for all of those environments, while other environments will continue to leverage different drive options.
Does this mean that block storage access is now dead?
Not quite; after all there is still some block activity involved, it has just been further abstracted. On the other hand, many applications, systems or environments still rely on block as well as file based access.
What about OpenStack, Ceph, Cassandra, Mongo, Hbase and other support?
Seagate has indicated those and others are targeted to be included in the ecosystem.
Seagate needs to be careful balancing their story and message with Kinetic to play to and support those focused on the new and emerging, while also addressing their bread and butter legacy markets. The balancing act is communicating options, flexibility to choose and adopt the right technology for the task without being scared of the future, or clinging to the past, not to mention throwing the baby out with the bath water in exchange for something new.
For those looking to do object storage systems, or cloud and other scale based solutions, Kinetic represents a new tool to do your due diligence and learn more about.
Ok, nuff said (for now)
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash
Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash
There is a nand flash solid state device (SSD) cash dash occurring, not to mention fast cache dances, in the IT and data infrastructure (e.g. storage and I/O) sector specifically.
Why the nand flash SSD cash dash and cache dance?
Yesterday hard disk drive (HDD) vendor Western Digital (WD) bought Virident a nand flash PCIe Solid State Device (SSD) card vendor for $650M, and today networking and server vendor Cisco bought Whiptail a SSD based storage system startup for a little over $400M. Here is an industry trends perspective post that I did yesterday on WD and Virident.
Obviously this begs a couple of questions, some of which I raised in my post yesterday about WD, Virident, Seagate, FusionIO and others.
Questions include
Does this mean Cisco is getting ready to take on EMC, NetApp, HDS and its other storage partners who leverage the Cisco UCS server?
IMHO at least near term no more than they have in the past, nor any more than EMC's partnership with Lenovo indicates a shift in what is done with vBlocks. On the other hand, some partners or customers may be as nervous as a long-tailed cat next to a rocking chair (Google it if you don't know what it means ;).
Is Cisco going to continue to offer Whiptail SSD storage solutions on a standalone basis, or pull them in as part of solutions similar to what it has done on other acquisitions?
IMHO this is one of the most fundamental questions, and despite the press release and statements about this being a UCS focus, a clear sign of proof for Cisco will be whether they rein in (if they go that route) Whiptail from being sold as a general storage solution (with SSD) as opposed to being part of a solution bundle.
How will Cisco manage its relationships in a coopetition manner, cooperating with the likes of EMC in the joint VCE initiative along with FlexPod partner NetApp among others? Again time will tell.
Also while most of the discussions about NetApp have been around the UCS based FlexPod business, there is the other side of the discussion which is what about NetApp E Series storage including the SSD based EF540 that competes with Whiptail (among others).
Many people may not realize how much DAS storage including fast SAS, high-capacity SAS and SATA or PCIe SSD cards Cisco sells as part of UCS solutions that are not vBlock, FlexPod or other partner systems.
NetApp and Cisco have partnerships that go beyond the FlexPod (UCS and ONTAP based FAS), so it will be interesting to see what happens in that space (if anything). This is where Cisco and their UCS acquiring Whiptail is not that different from IBM buying TMS to complement their servers (and storage) while also partnering with other suppliers; the same holds true for server vendors Dell, HP, IBM and Oracle among others.
Can Cisco articulate and convince their partners, customers, prospects and others that the Whiptail acquisition is more about direct attached storage (DAS), which includes both internal dedicated and external shared devices?
Keep in mind that DAS does not have to mean Dumb A$$ Storage as some might have you believe.
Then there are the more popular questions of who is going to get bought next, what will NetApp, Dell, Seagate, Huawei and a few others do?
Oh, btw, funny how I have not seen any of the pubs mention that Whiptail CEO Dan Crain is a former Brocadian (e.g. former Brocade CTO), Brocade being a Cisco competitor, just saying.
Congratulations to Dan and his crew and enjoy life at Cisco.
Stay tuned as the fall 2013 nand flash SSD cache dash and cash dance activities are well underway.
Ok, nuff said (for now).
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
WD buys nand flash SSD storage I/O cache vendor Virident
WD buys nand flash SSD storage I/O cache vendor Virident
Congratulations to Virident for being bought today for $645 Million USD by Western Digital (WD). Virident, a nand flash PCIe card startup vendor, has been around for several years and in the last year or two has gained more industry awareness as a competitor to FusionIO among others.
There is a nand flash solid state device (SSD) cash dash occurring, not to mention fast cache dances, in the IT and data infrastructure (e.g. storage and I/O) sector specifically.
Why the nand flash SSD cash dash and cache dance?
Here is a piece that I did today over at InfoStor on a related theme that sets the basis of why the nand flash-based SSD market is popular for storage and as a cache. Hence there is a flash cash dash, and for some a dance, for increased storage I/O performance.
Like the hard disk drive (HDD) industry before it, which despite what some pundits and prophets have declared (for years if not decades) as being dead is still alive, there have been many startups, shutdowns, mergers and acquisitions along with some transformations. Granted, solid-state memories are part of the present and future being deployed in new and different ways.
The same thing has occurred in the nand flash-based SSD sector with LSI acquiring SandForce, and SanDisk picking up Pliant and FlashSoft among others. Then there is Western Digital (WD), who recently has danced with their cash as they dash to buy up all things flash including Stec (drives and PCIe cards), Velobit (cache software), Virident (PCIe cards), along with Arkeia (backup) and an investment in Skyera.
What about industry trends and market dynamics?
Meanwhile there have been some other changes, with former industry darling and high-flying post-IPO stock FusionIO hitting market reality and a sudden CEO departure a few months ago. However after a few months of their stock being pummeled, today it bounced back, perhaps as people now speculate who will buy FusionIO with WD picking up Virident. Note that one of Virident's OEM customers is EMC for their PCIe flash card XtremSF, as are Micron and LSI.
Meanwhile Stec, also now owned by WD, was also EMC's original flash SSD drive supplier, or what they refer to as EFDs (Electronic Flash Devices), not to mention having also supplied HDDs to them (also keep in mind WD bought HGST a year or so back).
There are some early signs, such as their stock price jumping today, having probably been oversold. Perhaps people are now speculating that maybe Seagate, who had been an investor in Virident (bought by WD for $645 million today), might be in the market for somebody else? Alternatively, perhaps WD didn't see the value in a FusionIO, or wasn't willing to make a big flash cache cash grab dash of that size? Also note Seagate won a $630 million infringement lawsuit vs. WD, and the next appeal was recently upheld (here and here).
Does that mean FusionIO could become Seagate's target, or that of NetApp, Oracle or somebody else with the cash and willingness to dash in and grab a chunk of the nand flash and cache market?
Likewise, there are the software I/O and caching tool vendors that are gaining popularity, some of which are tied to VMware and virtual servers vs. others that are more flexible. What about the systems or solution appliances play, could that be in the hunt for a Seagate?
Anything is possible however IMHO that would be a risky move, one that many at Seagate probably still remember from their experiment with Xiotech, not to mention stepping on the toes of their major OEM customer partners.
Thus I would expect Seagate, if they do anything, would be more along the lines of a component type supplier, meaning a FusionIO (yes they have NexGen, however that could be easily dealt with), OCZ, perhaps even an LSI or Micron, however some of those start to get rather expensive for a quick flash cache grab for some stock and cash.
Also, keep in mind that FusionIO, in addition to having their PCIe flash cards, also has the ioTurbine software caching tool; if you are not familiar with it, IBM recently made an announcement of their Flash Cache Storage Accelerator (FCSA) that has an affiliation to guess who?
Closing comments (for now)
Some of the systems or solutions players will survive, perhaps even being acquired as XtremIO was by EMC, or will file for IPO like Violin, or express their wish to IPO and/or be bought like all the others (e.g. Skyera, Whiptail, Pure, Solidfire, Cloudbyte, Nimbus, Nimble, Nutanix, Tegile, Kaminario, Greenbyte, and Simplivity among others).
Here's the thing: those who really do know what is going to happen cannot say, and those who are talking about what will happen are like the rest of us, just speculating, providing perspectives or stirring the pot among other things.
So who will be next in the flash cache SSD cash dash dance?
Ok, nuff said (for now).
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
EMC New VNX MCx doing more storage I/O work vs. just being more
It's not how much you have, it's how storage I/O work gets done that matters
Following last week's VMworld event in San Francisco, where among other announcements was this one around Virtual SAN (VSAN) along with Software Defined Storage (SDS), EMC today made several announcements.
Today’s EMC announcements include:
- The new VNX MCx (Multi Core optimized) family of storage systems
- VSPEX proven infrastructure portfolio enhancements
- Availability of ViPR Software Defined Storage (SDS) platform (read more from earlier posts here, here and here)
- Statement of direction preview of Project Nile for elastic cloud storage platform
- XtremSW server cache software version 2.0 with enhanced management and support for VMware, AIX and Oracle RAC
Summary of the new EMC VNX MCx storage systems include:
- More processor cores, PCIe Gen 3 (faster bus), front-end and back-end IO ports, DRAM and flash cache (as well as drives)
- More 6Gb/s SAS back-end ports to use more storage devices (SAS and SATA flash SSD, fast HDD and high-capacity HDD)
- MCx – Multi-core optimized with software rewritten to make use of threads and resources vs. simply using more sockets and cores at higher clock rates
- Data Footprint Reduction (DFR) capabilities including block compression and dedupe, file dedupe and thin provisioning
- Virtual storage pools that include flash SSD, fast HDD and high-capacity HDD
- Block (iSCSI, FC and FCoE) and NAS file (NFS, pNFS, CIFS) front-end access with object access via Atmos Virtual Edition (VE) and ViPR
- Entry level pricing starting at below $10,000 USD
What is this MCx stuff, is it just more hardware?
While there is more hardware that can be used in different configurations, the key or core (pun intended) around MCx is that EMC has taken the time and invested in reworking the internal software of the VNX, which has its roots going back to the Data General CLARiiON that EMC acquired. This is similar to an effort EMC made a few years back when it overhauled what is now known as the VMAX, from the Symmetrix into the DMX. That effort expanded from a platform or processor port to re-architecting and software optimizing (rewriting portions) to leverage new and emerging hardware capabilities more effectively.
With MCx, EMC is doing something similar in that core portions of the VNX software have been re-architected and written to take advantage of more threads and cores being available to do work more effectively. This is not all that different from what occurs (or should) with upper level applications that eventually get rewritten to leverage underlying new capabilities to do more work faster and use technologies in a more cost-effective way. MCx also leverages flash as a primary medium, with data then being moved (in 256MB chunks) down into lower tiers of storage (SSD and HDD drives).
EMC VNX has had FAST Cache in the past, which enables SSD drives to be used as an extension of main cache as well as being used as drive targets. Thus while MCx can and does leverage more and faster cores as would most any software, it is also able to leverage those cores and threads in a more effective way. After all, it's not just how many processors, sockets, cores, threads, L1/L2 cache, DRAM, flash SSD and other resources you have, it's how effectively you use them. Also keep in mind that a bit of flash in the right place used effectively can go a long way vs. having a lot of cache in the wrong place or not used optimally, which will end up costing a lot of cash.
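As a toy illustration of chunk-granularity tiering, consider the sketch below. Only the 256MB movement granularity comes from the text; the tier names, thresholds and promotion logic are invented for the example, and real FAST VP policies are far more sophisticated.

```python
CHUNK_MB = 256                      # FAST VP movement granularity, per the text
TIERS = ["flash", "fast_hdd", "capacity_hdd"]

def retier(chunks: dict, hot_threshold: int, cold_threshold: int) -> dict:
    """Promote hot 256MB chunks toward flash, demote cold ones toward capacity HDD.

    chunks maps chunk_id -> (current_tier_index, io_count); the thresholds are
    invented for illustration -- real policies weigh recency, skew and more.
    """
    placement = {}
    for chunk_id, (tier, io_count) in chunks.items():
        if io_count >= hot_threshold:
            tier = max(tier - 1, 0)                # move up toward flash
        elif io_count <= cold_threshold:
            tier = min(tier + 1, len(TIERS) - 1)   # move down toward capacity HDD
        placement[chunk_id] = TIERS[tier]
    return placement

print(retier({"c1": (1, 900), "c2": (0, 3)}, hot_threshold=500, cold_threshold=10))
# {'c1': 'flash', 'c2': 'fast_hdd'}
```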
Moving forward this means that EMC should be able to further refine and optimize other portions of the VNX software not yet updated to make further benefit of new hardware platforms and capabilities.
Does this mean EMC is catching up with newer vendors?
Similar to how more of something is not always better, it's how those items are used that matters; just because something is new does not mean it's better or faster. That will manifest itself when they are demonstrated and performance results are shown. However, key is showing the performance across different workloads that have relevance to your needs and that convey metrics that matter with context.
Context matters, including type and size of work being done, number of transactions, IOPs, files or videos served, pages processed or items rendered per unit of time, or response time and latency (aka wait or think time), among others. Thus some newer systems may be faster on paper, PowerPoint, WebEx, YouTube or via some benchmarks, however what is the context and how do they compare to others on an apples to apples basis?
What are some other enhancements or features?
- Leveraging of FAST VP (Fully Automated Storage Tiering for Virtual Pools) with improved MCx software
- Increased effectiveness of available hardware resources (processors, cores, DRAM, flash, drives, ports)
- Active-active LUNs accessible by both controllers as well as legacy ALUA support
Data sheets and other material for the new VNX MCx storage systems can be found here, with software options and bundles here, and general speeds and feeds here.
Learn more here at the EMC VNX MCx storage system landing page and compare VNX systems here.
What does then new VNX MCx family look like?
Is VNX MCx all about supporting VMware?
Interesting that if you read between the lines, listen closely to the conversations and ask the right questions, you will realize that while VMware is an important workload or environment to support, it is not the only one targeted for VNX. Likewise, if you listen and look beyond what is normally amplified in various conversations, you will find that systems such as VNX are being deployed as back-end storage in cloud (public, private, hybrid) environments for use with technologies such as OpenStack or object based solutions (visit www.objectstoragecenter.com for more on object storage systems and access).
There is a common myth that cloud and service providers all use white box commodity hardware including JBOD for their systems, which some do, however some are also using systems such as VNX among others. In some of these scenarios the VNX type systems are or will be deployed in large numbers, essentially consolidating the functions of what had been done by an even larger number of JBOD based systems. This is where some of you will have a DejaVu or back to the future moment from the mid 90s, when there was an industry movement to combine all the DAS and JBOD into larger storage systems. Don't worry if you are not yet reading about this trend in your favorite industry rag or analyst briefing notes, however ask or look around and you might be surprised at what is occurring; granted it might be another year or two before you read about it (just saying ;).
What that means is that VNX MCx is also well positioned for working with ViPR or Atmos Virtual Edition among other cloud and object storage stacks. VNX MCx is also well positioned, with its new low cost of entry, for general purpose workloads and applications ranging from file sharing, email, web and database, along with those demanding high performance and low latency with large amounts of flash SSD. In addition to being used for general purpose storage, VNX MCx will also complement data protection solutions for backup/restore, BC, DR and archiving such as Data Domain, Avamar and Networker among others. Speaking of server virtualization, EMC also has tools for working with Hyper-V, Xen and KVM in addition to VMware.
If there is an all flash VNX MCx doesn’t that compete with XtremIO?
Yes there are all flash VNX MCx just as there have been all flash VNX before, however these will be positioned for different use case scenarios by EMC and their partners to avoid competing head to head with XtremIO. Thus EMC will need to be diligent in being very clear to its own sales and marketing forces as well as those of partners and customers of what to use when, where, why and how.
General thoughts and closing comments
The VNX MCx is a good set of enhancements by EMC and an example of how it's not as important how much you have, rather how you can use it to be more effective.
Ok, nuff said (for now).
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Is more of something always better? Depends on what you are doing
Is more always better? Depends on what you are doing
As with many things it depends, however how about some of these?
Is more better for example (among others):
- Facebook likes
- Twitter followers or tweets (I’m @storageio btw)
- Google+ likes, follows and hangouts
- More smart phone apps
- LinkedIn connections
- People in your circle or community
- Photos or images per post or article
- People working with or for you
- Partners vs. doing more with those you have
- People you are working for or with
- Posts or longer posts with more in them
- IOPs or SSD and storage performance
- Domains under management and supported
- GB/TB/PB/EB supported or under management
- Part-time jobs or a better full-time opportunity
- Metrics vs. those that matter with context
- Programmers to get job done (aka mythical man month)
- Lines of code per cost vs. more reliable and tested code per cost
- For free items and time spent managing them vs. more productivity for a nominal fee
- Meetings for planning on what to do vs. streamline and being more productive
- More sponsors or advertisers or underwriters vs. fewer yet more effective ones
- Space in your booth or stand at a trade show or conference vs. using what you have more effectively
- Copies of the same data vs. fewer yet more unique (not full though) copies of information
- Patents in your portfolio vs. more technology and solutions being delivered
- Processors, sockets, cores, threads vs. using them more effectively
- Ports and protocols vs. using them more effectively
Thus do more resources matter, or does making more effective use of them?
For example more ports, protocols, processors, cores, sockets, threads, memory, cache, drives, bandwidth, people among other things is not always better, particular if those resources are not being used effectively.
Likewise don't confuse effective with efficient, which is often assumed to mean utilized.
For example a cache or memory may be 100% used (what some call efficient) yet only provide a 35% effective benefit (cache hits) vs. cache churn (misses, etc.).
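That distinction between utilization and effectiveness can be expressed as a tiny calculation (an illustrative sketch, not any particular product's metrics):

```python
def cache_stats(hits: int, misses: int, bytes_used: int, bytes_total: int) -> tuple:
    """Utilization says how full the cache is; effectiveness says how much it helps."""
    utilization = bytes_used / bytes_total
    hit_rate = hits / (hits + misses)
    return utilization, hit_rate

# A cache can be 100% utilized ("efficient") yet only ~35% effective:
print(cache_stats(hits=35, misses=65, bytes_used=64, bytes_total=64))
# (1.0, 0.35)
```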
Throwing more processing power in terms of clock speed, or cores is one thing, kind of like throwing more server blades at a software problem vs. using those cores and sockets not to mention threads more effectively.
Good software will run better on fast hardware while enabling more to be done with the same or less.
Thus with better software or tools, more work can be done in an effective way leveraging those resources vs. simply throwing or applying more at the situation.
Hopefully you get the point, so no need to do more with this post (for now), if not, stay tuned and pay more attention around you.
Ok, nuff said, I need to go get more work done now.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Fall 2013 Dutch cloud, virtual and storage I/O seminars
Fall 2013 Dutch cloud, virtual and storage I/O seminars
It is that time of the year again when StorageIO will be presenting a series of seminar workshops in the Netherlands on cloud, virtual and data storage networking technologies, trends along with best practice techniques.
StorageIO partners with the independent firm Brouwer Storage Consultancy of Holland who organizes these sessions. These sessions will also mark Brouwer Storage Consultancy celebrating ten years in business along with a long partnership with StorageIO.
Server Storage I/O Backup and Data Protection Cloud and Virtual
The fall 2013 Dutch seminars include coverage of storage I/O networking, data protection and related trends and topics for cloud and virtual environments. Click on the following links or images to view an abstract of the three sessions including what you will learn, who they are for, buzzwords, themes, topics and technologies that will be covered.
- Modernizing Data Protection: September 30 & October 1, 2013
- Storage Industry Trends: October 2, 2013
- Storage Decision Making: October 3 and 4, 2013
All workshop seminars are presented in a vendor and technology neutral manner (e.g. these are not vendor marketing or sales presentations), providing independent perspectives on industry trends, who is doing what, and the benefits and caveats of various approaches to addressing data infrastructure and storage challenges. View posts about earlier events here and here.
As part of the theme of being vendor and technology neutral, the workshop seminars are held off-site at hotel venues in Nijkerk, Netherlands, so there is no need to worry about sales teams coming in to sell you something during the breaks or lunch (which are provided). There are also opportunities throughout the workshops for engagement, discussion and interaction with other attendees, including your peers from various commercial, government and service provider organizations.
Learn more and register for these events by visiting the Brouwer Storage Consultancy website page (here) and calling them at +31-33-246-6825 or via email info@brouwerconsultancy.com.
View other upcoming and recent StorageIO activities including live in-person, online web and recorded activities on our events page here, as well as check out our commentary and industry trends perspectives in the news here.
Ok, nuff said, I’m already hungry for bitterballen!
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
VMworld 2013 VMware, server, storage I/O and networking update (Day 1)
Congratulations to VMware on 10 years of VMworld!
With the largest installment yet of VMworld in terms of attendance, there were also many announcements today (e.g. Monday) and many more slated throughout the week. Here is a synopsis of some of those announcements.
Software Defined Data Center (SDDC) and Software Defined Networks (SDN)
VMware made a series of announcements today that set the stage for many others. Not surprisingly, these involved SDDC, SDN, SDS, vSphere 5.5 and other management tool enhancements, or the other SDM (Software Defined Management).
Here is a synopsis of what was announced by VMware.
- VMware NSX (SDN), which combines Nicira NVP™ along with vCloud Network and Security
- VMware Virtual SAN (VSAN), not to be confused with virtual storage appliances (VSAs)
- VMware vCloud Suite 5.5
- VMware vSphere 5.5 (includes support for new Intel Xeon and Atom processors)
- VMware vSphere App HA
- VMware vSphere Flash Read Cache software
- VMware vSphere Big Data Extensions
- VMware vCloud Automation Center
- VMware vCloud
Note that while these were announced today, some will be in public beta soon, with general availability over the next few months or quarters (learn more here, including pricing and availability). More on these and other enhancements in future posts. However, for now check out what Duncan Epping (@DuncanYB) of VMware has to say over at his Yellow-Bricks site here, here and here.
Buzzword Bingo
Additional VMworld Software Defined Announcements
Dell made some announcements as well for cloud and virtual environments in support of VMware, spanning networking, servers, hardware and software. With all the recent acquisitions by Dell, including Quest (where they picked up the Foglight management tools along with vRanger, BakBone and others), Dell has amassed an interesting portfolio. On the hardware front, check out the VRTX shared server infrastructure; I want one for my VMware environment, now I just need to justify one (to myself). Speaking of Dell, if you are at VMworld on Tuesday August 27 around 1:30PM, stop by the Dell booth where I will be presenting, including announcing some new things (stay tuned for more on that soon).
HP also had some announcements today, jumping into SDDC and SDN with some Software Defined Marketing (SDM) and Software Defined Announcements (SDA), in addition to using the Unified Data Center theme. Today’s announcements by HP were focused more around SDN and VMware NSX, along with the HP Virtual Application Networks SDN Controller and VMware networking.
NetApp (Booth #1417) announced more integration between their Data ONTAP based solutions and VMware vSphere, Horizon Suite, vCenter, vCloud Automation Center and vCenter Log Insight under the theme of SDDC and SDS. As part of the enhancements, NetApp announced Virtual Storage Console (VSC 5.0) for end-to-end storage management and software in VMware environments, along with integration with VMware vCenter Server 5.5. Not to be left out of the SSD flash dash, NetApp also released a new V1.2 of their FlashAccel software for vSphere 5.0 and 5.1.
Cloud, Virtualization and DCIM
Here is one that you probably have not seen or heard much about elsewhere: Nlyte’s announcement of their V1.5 Virtualization Connector for Data Center Infrastructure Management (DCIM). Keep in mind that DCIM is more than facilities, power and cooling related themes, particularly in virtual data centers. Thus, some of the DCIM vendors, as well as others, are moving into the converged DCIM space that spans server, storage, networking, hardware, software and facilities topics.
Interested in or want to know more about DCIM? Then check out these items:
- Data Center Infrastructure Management (DCIM) and Infrastructure Resource Management (IRM)
- Data Center Tools Can Streamline Computing Resources
- Considerations for Asset Tracking and DCIM
Data Protection including Backup/Restore, BC, DR and Archiving
Quantum announced that Commvault has added support to use the Lattus object storage based solution as an archive target platform. You can learn more about object storage (access and architectures) here at www.objectstoragecenter.com.
PHD Virtual did a couple of data protection (backup/restore, BC, DR) related announcements (here and here). Speaking of backup/restore and data protection, if you are at VMworld on Tuesday August 27th around 1:30PM, stop by the Dell booth where I will be presenting, and stay tuned for more info on some things we are going to announce at that time.
In case you missed it, Imation (who bought Nexsan earlier this year) last week announced their new unified NST6000 series of storage systems. The NST6000 storage solutions support Fibre Channel (FC) and iSCSI for block access, along with NFS, CIFS/SMB and FTP for file access from virtual and physical servers.
Emulex announced some new 16Gb Fibre Channel (e.g. 16GFC, aka what Brocade wants you to refer to as Gen 5) converged and multi-port adapters. I wonder how many still remember, or would rather forget, how many ASIC and adapter generations from various vendors occurred just at 1Gb Fibre Channel?
Caching and flash SSD
Proximal Data announced V2.0 of AutoCache with role based administration, multi-hypervisor support (a growing trend beyond just a VMware focus) and more vCenter/vSphere integration. This is on the heels of last week’s FusionIO powered IBM Flash Cache Storage Accelerator (FCSA) announcement, along with others such as EMC, Infinio, Intel, NetApp, Pernix and SanDisk (FlashSoft) to name a few.
Mellanox (VMworld booth #2005), you know, the InfiniBand folks who also have Ethernet technology (which includes Fibre Channel over Ethernet), did a series of announcements today with various PCIe NAND flash SSD card vendors. The common theme among the various vendors, including Micron (Booth #1635) and LSI, is support of VMware virtual servers using iSER, or iSCSI over RDMA (Remote Direct Memory Access). RDMA, or server-to-server direct memory access (what some of you might know as remote memory mapped I/O or channel-to-channel C2C), enables very fast, low latency server-to-server data movement, such as in a VMware cluster. Check out Mellanox and their 40Gb Ethernet along with InfiniBand among other solutions if you are into server, storage I/O and general networking, along with their partners. Need or want to learn more about networking with your servers and storage? Check out Cloud and Virtual Data Storage Networking and Resilient Storage Networks.
Rest assured there are many more announcements and updates to come this week, and in the weeks to follow…
Ok, nuff said (for now).
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Care to help Coraid with a Storage I/O Content Conversation?
Over the past week or so I have had many email conversations with the Coraid marketing/public relations (PR) folks, who want to share some of their unique or custom content with you.
Normally I (aka @StorageIO) do not accept unsolicited (placed) content (particularly product pitches/placements) from vendors or their VARs, PR firms or surrogates, including third or fourth party placement firms. Granted, StorageIOblog.com does have site sponsors; per our policies, that is all those are: advertisements, with no more or less influence than anyone else has. StorageIO does do commissioned or sponsored custom content, including white papers and solution briefs among other things, with applicable disclosures and retention of editorial tone and control.
However, wanting to experiment with things, not to mention given Coraid’s persistence, let’s try something to see how it works.
Who is Coraid and what do they do?
Coraid, for those who are not aware, provides an alternative storage and I/O networking solution called ATA over Ethernet, or AoE (here is a link to Coraid’s analyst supplied content page). AoE enables servers with the applicable software to access AoE equipped storage (or an appropriately equipped appliance) using Ethernet as the interconnect and transport. On the low-end, AoE is an alternative to USB, Thunderbolt or direct attached SATA or SAS, along with switched or shared SAS (keep in mind SATA can plug into SAS, not vice versa).
In addition, AoE is an alternative to the industry standard iSCSI (the SCSI command set mapped onto TCP/IP), which can be found in various solutions including as a software stack. Another area where AoE is positioned by Coraid is as an alternative to Fibre Channel SCSI_FCP (FCP) and Fibre Channel over Ethernet (FCoE). Keep in mind that Coraid AoE is block based (granted they have other solutions) as opposed to NAS (file) such as NFS, CIFS/SMB/SAMBA, pNFS or HDFS among others, and that it uses native Ethernet frames as opposed to being layered on top of TCP/IP the way iSCSI is.
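To illustrate that native Ethernet point, here is a minimal sketch that packs the AoE header fields as described in the public AoE specification (registered ethertype 0x88A2); the shelf/slot address and tag values are hypothetical, for illustration only:

    import struct

    AOE_ETHERTYPE = 0x88A2  # registered ethertype for ATA over Ethernet

    def aoe_header(major, minor, command, tag, version=1):
        # AoE header that follows the 14-byte Ethernet header, per the public
        # AoE spec: ver/flags, error, major (shelf), minor (slot), command,
        # and a 4-byte tag used to match responses back to requests.
        ver_flags = (version & 0x0F) << 4
        return struct.pack("!BBHBBI", ver_flags, 0, major, minor, command, tag)

    # Hypothetical target at shelf 0, slot 1; command 1 = query config info.
    payload = aoe_header(major=0, minor=1, command=1, tag=0x1234ABCD)
    print(payload.hex())  # 10 bytes; note there is no IP or TCP header at all

The absence of the IP and TCP layers is what keeps AoE lightweight, and it is also why AoE traffic stays on the local Ethernet segment rather than being routable the way iSCSI traffic is.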
So here is the experiment
Since Coraid wanted to get their unique content placed, either by them or via others, let’s see what happens in the comments section here at StorageIOblog.com. The warning of course is to keep it respectful and courteous, with no bashing or disparaging comments about others (vendors, products, technology).
Thus the experiment is simple: let’s see how the conversation evolves around the caveats, benefits, tradeoffs and experiences of those who have used or looked into the solution (pro or con), and why they hold a particular opinion. If you have a perspective or opinion, no worries; however, put it in context, including whether you are a Coraid employee, VAR, reseller or surrogate, and likewise for those with other views (state who you are, your affiliation and other disclosures). Likewise, if providing or supplying links to any content (white papers, videos, webinars), including via third parties, provide applicable disclosures (e.g. whether it was sponsored and by whom).
Disclosure
While I have mentioned or provided perspectives about them via different venues (online, print and in person) in the past, Coraid has never been a StorageIO client. Likewise this is not an endorsement for or against Coraid and their AoE or other solutions, simply an industry trends perspective.
Ok, nuff said (for now).
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved