CompTIA needs input for their Storage+ certification, can you help?

The CompTIA folks are looking for some comments and feedback from those who are involved with data storage in various ways as part of planning for their upcoming enhancements to the Storage+ certification testing.

As a point of disclosure, I am a member of the CompTIA Storage+ certification advisory committee (CAC); however, I do not get paid or receive any other remuneration for contributing my time to give them feedback and guidance, other than a thanks and an atta boy for giving back and paying it forward to help others in the IT community, similar to what my predecessors did.

I have been asked to pass this along to others (e.g. you, or whoever forwards it on to you).

Please take a few moments to complete the survey for CompTIA Storage+, and feel free to share the link with others.

What they are looking for is to validate the exam blueprint generated from a recent Job Task Analysis (JTA) process.

In other words, does the certification exam show real-world relevance to what you and your associates actually do with data storage?

This is as opposed to the exam being aligned only with those whose job it is to create test questions, who may not understand what you, the IT pro involved with storage, do or do not do.

If you have ever taken a certification exam and scratched your head wondering why questions that seem to lack real-world relevance were included, while ones drawn from practical on-the-job experience were missing, here is your chance to give feedback.

Note that you will not be rewarded with an Amex or Amazon gift card, Starbucks or Dunkin Donuts certificates, a free software download or some other incentive to play and win. However, if you take the survey, let me know and I will be sure to tweet you an atta boy or atta girl! They are also giving away a free T-shirt to every tenth survey taker.

Btw, if you really need something for free, send me a note (I'm not that difficult to find), as I have some free copies of Resilient Storage Networking (RSN): Designing Flexible Scalable Data Infrastructures (Elsevier); you simply pay shipping and handling. RSN can help you prepare for various storage exams as well as other day-to-day activities.

CompTIA is looking for survey takers who have some hands-on experience with, or involvement in, data storage (e.g. if you can spell SAN, NAS, disk or SSD and work with them hands-on, you are a candidate ;).

Welcome to the CompTIA Storage+ Certification Job Task Analysis (JTA) Survey

  • Your input will help CompTIA evaluate which test objectives are most important to include in the CompTIA Storage+ Certification Exam
  • Your responses are completely confidential.
  • The results will only be viewed in the aggregate.
  • Here is who CompTIA is looking for feedback from:

  • Has at least 12 to 18 months of experience with storage-related technologies.
  • Makes recommendations and decisions regarding storage configuration.
  • Facilitates data security and data integrity.
  • Supports a multiplatform and multiprotocol storage environment with little assistance.
  • Has basic knowledge of cloud technologies and object storage concepts.
  • As a small token of CompTIA's appreciation for your participation, they will provide an official CompTIA T-shirt to every tenth (1 of every 10) person who completes this survey. Go here for the official rules.

    Click here to complete the CompTIA Storage+ survey

    Contact CompTIA with any survey issues: research@comptia.org

    What say you? Take a few minutes like I did and give some feedback; you will not be on the hook for anything. And if you do get spammed by the CompTIA folks, let me know, and I in turn will spam them back for spamming you as well as me.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go

    StorageIO industry trends and perspectives

    In case you missed it, VMware recently announced spending $1.05 billion USD to acquire startup Nicira for its virtualization and software technology that enables software defined networks (SDN). Also last week, Oracle was in the news getting its hands slapped for making misleading performance claims in advertisements vs. IBM.

    On the heels of VMware buying Nicira for software defined networking (SDN), or what is also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with its announcement of intent to acquire Xsigo. Founded in 2004, Xsigo has a hardware platform combined with software that enables attachment of servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with its version of IOV.

    Now it is Oracle that has announced it will be acquiring IO, networking and virtualization hardware and software vendor Xsigo for an undisclosed amount. Xsigo has made its name in IO virtualization (IOV) and converged networking, along with the server and storage virtualization space, over the past several years, including partnerships with various vendors.

    Buzz word bingo

    Technology buzzwords and buzz terms can often be a gray area, leaving plenty of room for marketers and PR folks to run with them. Case in point: AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization, among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around the IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength, which is IO virtualization (aka IOV).

    To me and among others (here and here and here), it is interesting that Xsigo has not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as has happened with Nicira (and Oracle for that matter). When the Oracle marketing and PR folks put out the press release on a Monday morning, some of the media and press, trade industry, financial and general news agencies alike, took the Oracle script hook, line and sinker and ran with it.

    What was effective is how well many industry trade pubs and their analysts simply picked up the press release story and ran with it in the all too common race to see who can get the news or story out first, or before it actually happens in some cases.

    Image of media and newspapers

    To be clear, not all pubs jumped, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at the Speaking in Tech highlights. I know some who took the time to call, ask around, and leverage their journalistic training to dig, research and find out what this really meant vs. simply taking and running with the script. An example of one of those calls was with Beth Pariseau (aka @pariseautt); you can read her story here and here.

    Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the more known (at least in some circles) IOV and virtual IO descriptions. What is also interesting is that just last week Oracle marketing had its hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance based advertisements for ExaData.

    Oracle Exadata

    Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

    Both Oracle and Xsigo along with other InfiniBand (and some Ethernet and PCIe) focused vendors are members of the Open Fabric initiative, not to be confused with the group working on OpenFlow.

    Here are some other things to think about:

    Oracle has a history of doing acquisitions without disclosing terms, as well as doing them based on earn-outs, as was the case with Pillar.

    Oracle uses Ethernet in its servers and appliances, and has also been an adopter of InfiniBand, primarily for node to node communication, however also for server to application traffic.

    Oracle is also an investor in Mellanox, the folks who make InfiniBand and Ethernet products.

    Oracle has built various stacks including ExaData (Database machine), Exalogic, Exalytics and Database Appliance in addition to their 7000 series of storage systems.

    Oracle has done earlier virtualization related acquisitions including Virtual Iron.

    Oracle has a reputation with some of their customers who love to hate them for various reasons.

    Oracle has a reputation of being aggressive, even by other market leader aggressive standards.

    Integrated solution stacks (aka stack wars), or what some remember as bundles, continue, and Oracle has many solutions.

    What will happen to Xsigo as you know it today (besides what the press releases are saying)?

    While Xsigo was not a member of the Open Networking Forum (ONF), Oracle is.

    Xsigo is a member of the Open Fabric Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.

    What’s my take?

    While there are similarities in that both Nicira and Xsigo are involved with IO Virtualization, what they are doing, how they are doing it, who they are doing it with along with where they can play vary.

    Not sure what Oracle paid; however, assuming it was in the couple of million dollars or less, in cash or a combination of cash and stock, both they and the investors, as well as some of the employees, friends and families, did ok.

    Oracle also gets some intellectual property that it can combine with other earlier acquisitions via Sun and Virtual Iron, along with its investment in InfiniBand (and now Ethernet) vendor Mellanox.

    Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

    For Xsigo customers, the good news is that you now know who will be buying the company; however, there should be questions about the future beyond what is being said in press releases.

    Does this acquisition give Oracle a play in the software defined networking space like Nicira gives VMware? I would say no, given the hardware dependency; however, it does give Oracle some extra technology to play with.

    Likewise, while SDN is an important and popular buzzword topic, since OpenFlow comes up in conversations, perhaps that should be more of the focus vs. whether a solution is all software, or hardware and software.

    I also find it entertaining how last week the Better Business Bureau (BBB) and its National Advertising Division (NAD) slapped Oracle's hands after IBM complained of misleading performance claims about Oracle ExaData vs. IBM. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, but rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story, fed it to the press and media, who then ran with it.

    I'm not convinced that this is an all-out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (it makes for good headlines though). Instead, I'm seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle of summer price. Now if Oracle really wanted to go to battle with Cisco (and others), there are others to buy such as Brocade, Juniper, etc. However, there are other opportunities for Oracle to be focused on (or side tracked by) right now.

    Oh, let's also see what Cisco has to say about all of this, which should be interesting.

    Additional related links:
    Data Center I/O Bottlenecks Performance Issues and Impacts
    I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
    I/O Virtualization (IOV) Revisited
    Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
    The function of XaaS(X) Pick a letter
    What is the best kind of IO? The one you do not have to do
    Why FC and FCoE vendors get beat up over bandwidth?

    If you are interested in learning more about IOV, Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that's enough links for now ;).

    Ok, nuff said for now, as I have probably requalified for being on the Oracle you know what list for not sticking to the story script, oops, excuse me, I mean the press release message.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Modernizing data protection with certainty

    Speaking of modernizing data protection, back in June I was invited to be a keynote presenter on industry trends and perspectives at a series of five dinner events (Boston, Chicago, Palo Alto, Houston and New York City) sponsored by Quantum (that is a disclosure btw).

    backup, restore, BC, DR and archiving

    The theme of the dinner events was an engaging discussion around modernizing data protection with certainty, along with clouds, virtualization and related topics. Quantum and one of their business partner resellers started each event with introductions, followed by an interactive discussion led by myself, followed by David Chappa (@davidchapa), who tied the various themes to what Quantum is doing, along with some of their customer success stories.

    Themes and examples for these events build on my book Cloud and Virtual Data Storage Networking, including:

    • Rethinking how, when, where and why data is being protected
    • Big data, little data and big backup issues and techniques
    • Archive, backup modernization, compression, dedupe and storage tiering
    • Service level agreements (SLA) and service level objectives (SLO)
    • Recovery time objective (RTO) and recovery point objective (RPO)
    • Service alignment and balancing needs vs. wants, cost vs. risk
    • Protecting virtual, cloud and physical environments
    • Stretching your available budget to do more without compromise
    • People, processes, products and procedures
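
    To make the RPO and RTO themes above concrete, here is a minimal sketch of checking whether a backup schedule satisfies a recovery point objective. The numbers and function are hypothetical and generic, not tied to any product or service mentioned here:

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup, now, rpo):
    # Everything written after the last backup would be lost in a
    # failure right now, so the exposure window is now - last_backup.
    return (now - last_backup) <= rpo

# Hypothetical schedule: nightly 2:00 AM backup, failure mid-afternoon.
last_backup = datetime(2012, 7, 30, 2, 0)
outage = datetime(2012, 7, 30, 14, 0)

print(meets_rpo(last_backup, outage, timedelta(hours=24)))  # 12h exposure fits a 24h RPO
print(meets_rpo(last_backup, outage, timedelta(hours=4)))   # but not a 4h RPO
```

    The same comparison applies to an RTO, with elapsed restore time in place of the exposure window.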

    Quantum is among the industry leaders with multiple technology and solution offerings addressing different aspects of data footprint reduction and data protection modernization. These span physical, virtual and cloud environments, along with traditional tape, disk based, compression, dedupe, archive, big data, hardware, software and management tools. A diverse group of attendees has been at the different events, including enterprise and SMB, public, private and government organizations across different sectors.

    Following are links to some blog posts that covered the first series of events, along with some of the specific themes and discussion points from different cities:

    Via ITKE: The New Realities of Data Protection
    Via ITKE: Looking For Certainty In The Cloud
    Via ITKE: Success Stories in Data Protection: Cloud virtualization
    Via ITKE: Practical Solutions for Data Protection Challenges
    Via David Chappa's blog

    If you missed attending any of the above events, more dates are being added in August and September including stops in Cleveland, Raleigh, Atlanta, Washington DC, San Diego, Connecticut and Philadelphia with more details here.

    Ok, nuff said for now, hope to see you at one of the upcoming events.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Cloud and Virtual Data Storage Networking

    For those who have read any of my previous posts, seen some of my articles, newsletters, videos, podcasts, webcasts or in person appearances, you may have heard that I have a new book coming out this summer.

    Here in the northern hemisphere it's summer (well, technically the solstice is just around the corner), and in Minnesota the ice (from the winter) is off the lakes and rivers. Granted, there is some ice floating around that fell out of coolers keeping beverages cool. This means that it is also fishing (and catching) season on the scenic St. Croix River.

    Karen of Arcola catches the first fish of the 2011 season on the St. Croix River, a striped bass; Greg shows his first catch of the 2011 season, a St. Croix walleye aka Walter or Wanda

    FTC disclosures (and for fun): Karenofarcola is wearing a StorageIO baseball cap, and I'm wearing a cap from a vendor marketing person who sent several, as they too enjoy fishing and boating. Funny thing about the cap: all of the river rats and fishing people think it is from the people who make rod reels instead of solutions that go around tape and disk reels. Note, if you feel compelled to send me baseball caps, send at least a pair so there is a backup, standby, spare or extra one for a guest. The Mustang survival jacket that I'm wearing with the Seadoo logo is something I bought myself. I did get a discount, however, since there was a Seadoo logo on it and I used to have Seadoo jet boats. Btw, that was some disclosure fun and humor!

    Ok, enough of the fun stuff, let's get back to the main theme of this post.

    My new book is the third in a series of solo projects that includes Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier) and The Green and Virtual Data Center (CRC).

    While the official launch and general availability will be later in the summer, following are some links and related content to give you advance information about the new book.

    Cloud and Virtual Data Storage Networking

    Click on the above image, which will take you to the CRC Press page where you can learn more, including what the book is about, view a table of contents, see reviews and more. Also check out the video below to learn more, and visit my main web site where you can learn about Cloud and Virtual Data Storage Networking and my other books, and view (or listen to) related content such as white papers, solution briefs, articles, tips, webcasts and podcasts, as well as the recent and upcoming events schedule.

    I also invite you to join the Cloud and Virtual Data Storage Networking group.

    You can also view the short video at Dailymotion, Metacafe, blip.tv, Veoh, Flickr, and Photobucket among other venues.

    If you are interested in being a reviewer, send a note to cvdsn@storageio.com with your name, blog or website, and contact information including shipping address (sorry, no PO boxes) plus telephone (or Skype) number. Also indicate if you are a blogger, press/media, freelance writer, analyst, consultant, VAR, vendor, investor, IT professional or other.

    Watch for more news and information as we get closer to the formal launch and release. In the meantime, you can pre-order your copy now at Amazon, CRC Press and other venues around the world.

    Ok, time to get back to work or go fishing, nuff said

    Cheers Gs

    Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

    EMC VPLEX: Virtual Storage Redefined or Respun?

    In a flurry of announcements coinciding with EMCworld in Boston this week of May 10, 2010, EMC officially unveiled its Virtual Storage vision initiative (aka twitter hash tag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here, along with one from Stu Miniman (twitter @stu) of EMC here or here), and according to EMC the VPLEX product was made generally available (GA) back in April.

    The Virtual Storage vision and associated announcements consisted of:

    • Virtual Storage vision – Big picture  initiative view of what and how to enable private clouds
    • VPLEX architecture – Big picture view of federated data storage management and access
    • First VPLEX based product – Local and campus (Metro to about 100km) solutions
    • Glimpses of how the architecture will evolve with future products and enhancements


    Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

    The Big Picture
    The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud, which should enable characteristics including transparency, agility, flexibility, efficiency, always on availability, resiliency, security, on demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft HyperV and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage, similar to how virtual servers release applications and their operating systems from being tied to a physical server.

    While the current focus of desktop, server and storage virtualization has been consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency, and portability (Figure 2). In this next phase, which puts an emphasis on enablement and doing more with what you have while enhancing business agility, the focus extends from how much can be consolidated, or the number of virtual machines per physical machine, to using virtualization for flexibility and transparency (read more here and here or watch here).


    Figure 2: Virtual Storage Big Picture

    That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

    For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) means local and distributed (metro and wide area) federation management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX, data and storage management federation across EMC and third party products, FAST (fully automated storage tiering), SSD, data footprint reduction and data protection management products among others.

    Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, Pots and Pans) along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, db loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, FCoE or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, HyperV, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates, the big picture here is about enabling flexibility, agility, ease of deployment and management, along with boosting resource usage effectiveness and presumably productivity on a local, metro and, in the future, global basis.


    Figure 3: EMC Storage Federation and Enabling Technology Big Picture

    The VPLEX Big Picture
    Some of the tenets of the VPLEX architecture (Figure 3) include a scale out cluster or grid design for local and distributed (metro and wide area) access, where you can start small and evolve as needed in a predictable and deterministic manner.


    Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

    The VPLEX architecture is targeted towards enabling next generation data centers, including private clouds, where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between physical or virtual servers and, in theory, other storage systems that rely on underlying block storage. For example, in theory a NAS (NFS, CIFS, AFS) gateway, a CAS content archiving or object based storage system, or a purpose specific database machine could sit between actual application servers and VPLEX, enabling multiple layers of flexibility and agility for larger environments.

    At the heart of the architecture is an engine running a highly distributed data caching algorithm that sends a minimal amount of data to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes, along with load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters, including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


    Figure 5: EMC VPLEX Big Picture


    Figure 6: EMC VPLEX Local with 1 to 4 Engines

    Each VPLEX local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, local Intel compute processors, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports configured for high availability (HA). Communications between the directors and engines are Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.
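
    Taking the per-director numbers above at face value, the arithmetic for a fully populated four-engine local cluster looks like this. Note the assumption that cache and port counts are per director rather than per engine; check EMC's specifications before relying on these totals:

```python
# Back-of-the-envelope totals for a maximal 4-engine VPLEX Local cluster.
# Assumes the figures are per director: 64 GB cache, 16 FE + 16 BE Fibre
# Channel ports (an assumption for illustration, not a confirmed spec).
engines = 4
directors_per_engine = 2
cache_gb_per_director = 64
fe_ports_per_director = 16

directors = engines * directors_per_engine          # 8 directors
total_cache_gb = directors * cache_gb_per_director  # 512 GB of cache
total_fe_ports = directors * fe_ports_per_director  # 128 front-end ports

print(directors, total_cache_gb, total_fe_ports)
```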


    Figure 7: EMC VPLEX Engine with redundant directors

    Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC, along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McData based products.

    VPLEX also supports cache write thru (Figure 8) to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS 9000 and USPV/VM along with IBM DS8000, with others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.


    Figure 8: VPLEX Architecture and Distributed Cache Overview

    Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which retain their native feature functionality via VPLEX write thru caching.

    Reads from various servers can be resolved by any node or engine, which checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale reads or write operations from occurring. Actual metadata communication between nodes is very small, enabling statefulness while reducing overhead and maximizing performance. When cached data changes, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
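
    The pointer-based scheme described above can be sketched as a toy directory: nodes exchange only small ownership records, never the cached data itself. This is an illustrative simplification, not EMC's actual algorithm; all names and the data model are invented:

```python
class CacheDirectory:
    """Toy distributed-cache directory: shared metadata records only
    which node holds the current copy of each block (a pointer), so a
    write invalidates stale entries without shipping data around."""

    def __init__(self):
        self.owner = {}  # block id -> node holding the fresh copy

    def write(self, node, block):
        # Only this small ownership record is propagated to peers.
        self.owner[block] = node

    def read(self, node, block):
        holder = self.owner.get(block)
        if holder is None:
            return "fetch from back-end storage"   # pass-thru read
        if holder == node:
            return "local cache hit"
        return f"forward to node {holder}"          # avoids a stale read

d = CacheDirectory()
d.write("A", "lun1:blk42")
print(d.read("A", "lun1:blk42"))   # local cache hit
print(d.read("B", "lun1:blk42"))   # forward to node A
print(d.read("B", "lun9:blk7"))    # fetch from back-end storage
```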


    Figure 9: EMC VPLEX Metro Today

    For metro deployments, two clusters (Figure 9) are utilized, with distances supported up to about 100km or about 5ms of latency in a synchronous manner, utilizing long distance Fibre Channel optics and transceivers including Dense Wavelength Division Multiplexing (DWDM) technologies (see Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networking (Elsevier) for additional details on LAN, MAN and WAN topics).
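
    As a sanity check on the 100km figure, propagation delay alone can be estimated from the speed of light in fiber, roughly 200,000 km/s, or about 5 microseconds per km one way; the remainder of the latency budget goes to equipment, protocol and processing overhead:

```python
# Rough propagation-delay estimate for a synchronous metro link.
# Assumes ~5 microseconds per km one way in optical fiber.
def round_trip_ms(distance_km, us_per_km=5.0):
    return 2 * distance_km * us_per_km / 1000.0

# 100 km costs about 1 ms round trip in fiber alone; switches, DWDM
# gear and protocol handling consume the rest of a multi-ms budget.
print(round_trip_ms(100))  # 1.0
```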

    Initially EMC is supporting local or Metro including Campus based VPLEX deployments requiring synchronous communications however asynchronous (WAN) Geo and Global based solutions are planned for the future (Figure 10).


    Figure 10: EMC VPLEX Future Wide Area and Global

    Online Workload Migration across Systems and Sites
    Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

    For synchronization and data mobility operations, such as a VMware Vmotion or Microsoft HyperV Live Migration over distance, information is written to separate LUNs in different locations across what are known as stretch volumes to enable non disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives.
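
    The LUN accounting described above (two LUNs consumed per stretch volume, analogous to RAID 1's two drives per mirror) is simple enough to sketch. The 8,000-LUN pool size reuses the system limit mentioned earlier purely as an illustration:

```python
# Each stretch volume consumes one LUN at each site, so back-end LUNs
# disappear from the available pool two at a time, like RAID 1 mirrors.
def luns_remaining(total_luns, stretch_volumes):
    used = 2 * stretch_volumes
    if used > total_luns:
        raise ValueError("not enough LUNs for that many stretch volumes")
    return total_luns - used

# Illustrative pool of 8,000 virtualized LUNs with 100 stretch volumes.
print(luns_remaining(8000, 100))  # 7800
```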

    Unlike other approaches that, for coherency and performance, rely on either no cached data, or extensive amounts of cached data along with the subsequent overhead for maintaining statefulness (consistency and coherency), including avoiding stale reads or writes, VPLEX relies on a combination of distributed cache lookup tables along with pass thru access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached or shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help keep costs affordable.

    The approach is not unique; it is the implementation
    Some storage virtualization solutions that have been software based running on an appliance or network switch as well as hardware system based have had a focus of emulating or providing competing capabilities with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide advanced data and storage management capabilities found in traditional higher end storage products.

    While VPLEX, like any tool or technology, could be and probably will be made to do things other than what it is intended for, it is really focused on flexibility, transparency and agility as opposed to being used as a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or others) storage systems.

    This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused on replacing or providing functionality not found in underlying storage system products.

    In a nutshell, VPLEX and the Virtual Storage story (vision) are about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

    Let's Put It Together: When and Where to Use a VPLEX
    While many storage virtualization solutions are focused on consolidation or pooling, similar to the first wave of server and desktop virtualization, the next broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

    Future applications or usage scenarios should include, in addition to VMware vMotion, Microsoft Hyper-V and Microsoft Clustering, other host server clustering solutions.


    Figure 11: EMC VPLEX Usage Scenarios

    Thoughts and Industry Trends Perspectives:

    The following are various thoughts, comments, perspectives and questions pertaining to this and storage, virtualization and IT in general.

    Is this truly unique as is being claimed?

    Interestingly, the message I'm hearing out of EMC is not the claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, but rather that their implementation and ability to deploy it on a broad basis is what is unique. Now granted, you will probably hear, as is often the case with any vendor or fanboy/fangirl spin, claims of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogosphere, YouTube galleries, Twitter land and beyond.

    What is the déjà vu factor here?

    For some it will be nonexistent, yet for others there is certainly déjà vu depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many visions and initiatives from the late 90s and early 2000s, when storage virtualization or virtual storage in an open context jumped into the limelight coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market; others, including whole companies, have fallen by the wayside for a variety of reasons.

    Consequently, if you were part of, read or listened to any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD, not to mention IBM SVC, StoreAge (now LSI), Incipient (now part of Texas Memory) or Troika among others, you should have some déjà vu.

    I guess that also begs the question of what VPLEX is: in-band, out-of-band, or a hybrid fast path/control path approach? From what I have seen it appears to be a fast path approach combined with distributed caching, as opposed to a cache centric in-band approach such as IBM SVC (either on a server or, as was tried, on the Cisco special services blade) among others.

    Likewise, if you are familiar with IBM mainframe GDPS or even EMC GDDR, as well as OpenVMS local and metro clusters with distributed lock management, you should also have déjà vu. Similarly, if you have looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar, as EMC acquired the assets of that now defunct company.

    Is this a way for EMC to sell more hardware along with software products?

    By removing barriers and enabling IT staffs to support more data on more storage in a denser and more agile footprint, the answer should be yes, something that we may see other vendors emulate, or make noise about what they can do or have been doing already.

    How is this virtual storage spin different from the storage virtualization story?

    That all depends on your view or definition, as well as your belief systems and preferences for what is or is not virtual storage vs. storage virtualization. For those who believe that storage virtualization is virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, then you probably won't see this as virtual storage, let alone storage virtualization. However, for others it will be confusing, hence EMC introducing terms such as federation and avoiding terms including grid to minimize confusion, yet still play off the cloud crowd commotion.

    Is VPLEX a replacement for storage system based tiering and replication?

    I do not believe so. Even though some vendors are making claims that tiered storage is dead, just as some vendors declared a couple of years ago that disk drives would be dead this year at the hands of SSD, neither claim has come true (pun intended). What this means for VPLEX is that it leverages the underlying automated or manual tiering found in storage systems, such as EMC FAST enabled functionality or similar policy based and manual functions in third party products.

    What VPLEX brings to the table is the ability to transparently present a LUN or volume locally or over distance with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves, the applications, file systems or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to dig into to see how it all actually works, as is the case with any new technology.

    Who is this for?

    I see this as being for environments that need flexibility and agility across multiple storage systems, either from one or multiple vendors, on a local, metro or wide area basis. This is for those environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as vMotion or Live Migration among others.

    Do VPLEX and Virtual Storage eliminate need for Storage System functionality?

    I see some storage virtualization solutions or appliances that focus on replacing underlying storage system functionality instead of coexisting with or complementing it. A way to test for this approach is to listen or read whether the vendor or provider says anything along the lines of eliminating vendor lock-in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

    Was Invista a failure not going into production and this a second attempt at virtualization?

    There is a popular myth in the industry that Invista never saw the light of day outside of trade show expos or other demos; however, the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus, which was around enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently, Invista has often been drawn into apples to oranges comparisons with other virtualization appliances whose focus is pooling along with other functions, or in some cases serving as an appliance based storage system.

    The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is on enabling agility for maintenance, facilitating upgrades, moves or reconfiguration and other common IRM tasks, vs. using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with those of the storage virtualizer.

    Is this a replacement for EMC Invista?

    According to EMC the answer is no and that customers using Invista (Yes, there are customers that I have actually talked to) will continue to be supported. However I suspect that over time Invista will either become a low end entry for VPLEX, or, an entry level VPLEX solution will appear sometime in the future.

    How does this stack up or compare with what others are doing?

    If you are looking to compare it to cache centric platforms such as IBM's SVC, which adds extensive functionality and capabilities within the storage virtualization framework, this is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning as a complement to the underlying storage system, whereas SVC caches on a per cluster basis and enhances the functionality of the underlying storage system. Rest assured there will be other apples to oranges comparisons made between these platforms.

    How will this be priced?

    When I asked EMC about pricing, they would not commit to a specific price prior to the announcement, other than indicating that there will be options for on demand or consumption based (e.g. cloud) pricing, pricing per engine capacity, as well as subscription models (pay as you go).

    What is the overhead of VPLEX?

    While EMC runs various workload simulations (including benchmarks) internally as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC opposition to simulations such as SPC has been varied; however, this could be a good and interesting opportunity for them to silence the industry (including myself) who continue to ask them (along with a couple of other vendors, including IBM with their XIV) when they will release public results.

    The interesting opportunity I see for EMC is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, they could simply show the performance of some third party product that is already tested on the SPC website and then make a submission with that product running attached to a VPLEX.

    If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

    • Demonstrating the low latency and minimal to no overhead of VPLEX
    • Show VPLEX with a third party product comparing latency before and after
    • Provide a comparison to other virtualization platforms including IBM SVC

    As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at other public workload tests such as ESRP.

    Additional related reading material and links:

    Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
    Chapter 3: Networking Your Storage
    Chapter 4: Storage and IO Networking
    Chapter 6: Metropolitan and Wide Area Storage Networking
    Chapter 11: Storage Management
    Chapter 16: Metropolitan and Wide Area Examples

    The Green and Virtual Data Center (CRC)
    Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
    Chapter 4: IT Infrastructure Resource Management (IRM)
    Chapter 5: Measurement, Metrics, and Management of IT Resources
    Chapter 7: Server: Physical, Virtual, and Software
    Chapter 9: Networking with your Servers and Storage

    Also see these:

    Virtual Storage and Social Media: What did EMC not Announce?
    Server and Storage Virtualization – Life beyond Consolidation
    Should Everything Be Virtualized?
    Was today the proverbial day that he!! Froze over?
    Moving Beyond the Benchmark Brouhaha

    Closing comments (For now):
    As with any new vision, initiative, architecture and initial product, there will be plenty of questions to ask, items to investigate, and early adopter customers or users to talk with to determine what is real, what is future, what is usable and practical, along with what is nice to have. Likewise, there will be plenty of mud ball throwing and slinging between competitors, fans and foes, which, for those who enjoy watching or reading such things, should be well entertaining.

    In general, the EMC vision and story builds on, and presumably delivers on, past industry hype, buzz and vision with solutions that can be put into environments as a productivity tool that works for the customer, instead of the customer working for the tool.

    Remember, the golden rule of virtualization in play here is that whoever controls the virtualization or associated management controls the gold. Likewise, keep in mind that aggregation can cause aggravation. So do not be scared; however, look before you leap, meaning do your homework and due diligence with appropriate levels of expectations, aligning the applicable technology to the task at hand.

    Also, if you have seen or experienced something in the past, you are more likely to have déjà vu as opposed to seeing things as revolutionary. However, it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda; let's see if EMC can leverage their past experiences to make this a LottaLotta.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Virtual Storage and Social Media: What did EMC not Announce?

    Synopsis: EMC made a vision statement in a recent multimedia briefing that has a social networking angle as well as storage virtualization, virtual storage, public and private clouds.

    Basically, EMC provided, in a social media networking friendly manner, a preview of a vision initially being referred to as EMC Virtual Storage (aka Twitter hashtag #emcvs), which of course sounds similar to a pharmacy chain.

    The vision includes stirring up the industry with a new discussion around virtual storage compared to the decade old coverage of storage virtualization.

    The underlying theme of this vision is similar to that of virtual servers vs. server virtualization: just as there is the ability to move servers around, so too should there be the ability to move data around more freely on a local or global basis and in real or near real time. In other words, breaking the decades long affinity that has existed between data storage and the data that exists on it (Figure 1). Buzzword bingo themes include federated storage, virtual storage, public and private cloud, along with global cache coherency among others.


    Figure 1: EMC Virtual Storage (EMCVS) Vision

    The rest of the story

    On Thursday March 11th 2010 Pat Gelsinger (EMC President and COO, Information Infrastructure Products) held an interactive briefing with the global analyst community pertaining to future EMC trajectory or visions. One of the interesting things about this session was that it was not unique to industry analysts nor was it under NDA.

    For example, here is a link that if still active, should provide access to the briefing material.

    The visions being talked about include those that EMC has talked about in the past, such as virtualized data centers, or, putting a spin on the phrase, data center virtualization, along with public and private clouds as well as infrastructure resource management virtualization (Figure 2):


    Figure 2: Public and Private Clouds along with Virtual Data Centers

    Figure 2 is a fairly common slide used in many EMC discussions positioning public and private clouds along with virtualized data centers.


    Figure 3: Tenants of the EMC Virtual Storage (EMCVS) vision


    Figure 4: Enabling mobile data, breaking data and storage affinity


    Figure 5: Enabling teleporting and virtual storage

    This sets up the story for the need and benefit of distributed cache coherency, similar to the distributed lock management (DLM) used in local and wide area clustered file systems for maintaining data integrity.
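    The lock management side of that idea can be sketched in a few lines (a toy model only; real DLMs, such as those in OpenVMS clusters, handle lock modes, queuing, deadlock detection and failure recovery; the names below are invented for illustration):

```python
class LockManager:
    """Minimal exclusive-lock manager: one holder per resource at a time."""
    def __init__(self):
        self.holders = {}  # resource -> node currently holding the lock

    def acquire(self, resource, node):
        if self.holders.get(resource) not in (None, node):
            return False           # another node holds it; caller must wait/retry
        self.holders[resource] = node
        return True

    def release(self, resource, node):
        if self.holders.get(resource) == node:
            del self.holders[resource]

dlm = LockManager()
assert dlm.acquire("lun7", "siteA")      # site A gets exclusive access
assert not dlm.acquire("lun7", "siteB")  # site B must wait, preserving integrity
dlm.release("lun7", "siteA")
assert dlm.acquire("lun7", "siteB")      # now site B may write safely
```

    The point is that a writer anywhere in the cluster must first win the lock, which is what keeps distributed caches from serving stale data.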


    Figure 6: Leveraging distributed cache coherency

    This discussion around distributed cache coherency should evoke déjà vu of IBM GDPS (Geographically Dispersed Parallel Sysplex) for mainframes, OpenVMS distributed lock management for VAX and Alpha clusters, Oracle RAC, or other parallel and clustered file systems among others. Likewise, for those familiar with technology from YottaYotta, this should also ring familiar.

    However, while many are jumping on the YottaYotta familiarity bandwagon given comments made by Pat Gelsinger, something that came to mind is: what about EMC GDDR? Do not worry if that is an acronym or product you are not up on as an EMC follower; it stands for EMC Geographically Dispersed Disaster Restart (GDDR), a solution that is an alternative to IBM's proprietary GDPS. Perhaps there is no connection, perhaps there is some; however, what role, if any, including lessons learned, will come from EMC's experience with GDDR, not to mention other clustered file systems?


    Figure 7: The EMC vision as presented

    One of the interesting things about the vision announcement, and perhaps part of floating it out for discussion, was a comment made by Pat Gelsinger. That comment was about enabling the Wild West for IT, something that perhaps one generation might enjoy, however a notion another would soon forget. I'm sure the EMC marketing team, including their new chief marketing officer (CMO) Jeremy Burton, can fine-tune it with time.
     

    More on the social networking and non NDA angle

    As is often the case with many other vendors, these types of customer, partner, analyst or media briefings (either online or in person) are under some form of NDA or embargo, as they contain forward looking, yet to be announced products, solutions, technologies or other business initiatives. Note that these types of NDA discussions are not typically the same as those that portray or pretend to be NDA in order to sound more important a few days before an announcement that has already been leaked to get extra coverage, also known as media embargoes.

    After some amount of time, the information covered in advance briefings is usually formally made public, along with additional details. Sometimes material covered under NDA is shared in advance so that third parties can prepare reports, deep dive analysis or assessments and other content that is made available at announcement or shortly thereafter. The material is often prepared by partners, vars, media, analysts, consultants, customers or others outside of the announcing company via different venues ranging from print, online columns, blogs, tweets, videos and more.

    Lately there has been some confusion in the broader IT industry, as well as other industries, as to where and how to classify bloggers, tweeters or other social media practitioners. After all, is a blogger an analyst, journalist, freelance writer, advisor, vendor, consultant, customer, var, investor, hobbyist or competitor, not to mention, how does information get fed to them?

    Likewise, NDAs and embargoes have joined the list of fodder topics that some do not like for various reasons, yet others like to complain about. There is a time and place for real NDAs that cover and address material, discussions and other information that should not be shared. However, all too often NDAs get watered down, particularly in the press release games where a vendor or public relations (PR) firm will dangle an announcement briefing a couple of days or perhaps a week or two prior to an announcement under the guise that it not be disclosed prior to the formal announcement.

    Where these NDAs get tricky is that often they are honored by some and ignored by others; thus, those who honor the agreement get left behind by those who break the story. Personally, I do not mind real NDAs that are tied to genuinely confidential material, discussions or other information that needs to be kept under wraps for various reasons. However, the value or issues of NDAs are a whole different discussion; for now, let's get back to what EMC did not announce in their recent non-NDA briefing.

    Different organizations are addressing social media in various ways, some ignoring it, others embracing it regardless of what it is. EMC is an example of a vendor who has embraced social networking and social media along with traditional means of developing and maintaining relations with the media (media or press relations), customers, partners, vars, consultants, investors (e.g. investor relations) as well as analysts (analyst relations).

    For example, EMC works with analysts in traditional ways as they do with the media and other groups; however, they also recognize that while some analysts (or media or investors or partners or customers or vars etc.) blog and tweet (among other social networking mediums), not all do (as is also the case with media, customers, vars and so forth). Likewise, EMC from a social media and networking perspective does not appear to define audiences based on the medium or tool that they use, but rather in a matrix or multi dimensional approach.

    That is, an analyst with a blog is a blogger, a var or independent consultant with a blog is a blogger, and a media person, including freelance writers, journalists, reporters or publishers, with a blog is a blogger, as are vars, advisors, partners and competitors with blogs.



    Some of the 2009 EMC Bloggers Lounge Visitors

    Thus, at their EMCworld event, admission to the bloggers lounge is as simple and nonexclusive as having a blog, regardless of what your role or usage of a blog happens to be. On the other hand, information is communicated via different channels, such as for traditional press via public relations folks, investors through investor relations, analysts via analyst relations, partners and customers through their venues and so forth.

    When you think about it, this makes sense; after all, EMC sells and attaches storage to mainframes and open systems (Windows, UNIX, Linux) as well as virtual servers that use different tools, protocols, languages and points of interest. Thus it should not be surprising that their approach to communicating with different audiences leverages various mediums for diverse messages at multiple points in time.

     

    What does all of this social media discussion have to do with the March 11 EMC event?

    In my opinion, this was an experiment of sorts by EMC to test the waters by floating a new vision to their traditional pre-briefing audience in advance of talking with media prior to an actual announcement.

    That is, EMC did not announce a new product, technology, initiative, business alliance or customer event, rather a vision and trajectory or signaling what they may be doing in the future.

    How this ties to social media and networking is that rather than being an event only for those media, bloggers, tweeters, customers, consultants, vars, freelancers, partners or others who agreed to do so under NDA, EMC used the venue as an advance sounding board of sorts.

    That is, by sticking to a broad vision vs. proprietary, confidential or sensitive topics, the discussion has been put out in advance in the open to stimulate discussion in traditional reports, articles, columns or related venues, as well as in near real time via Twitter, blogs and beyond.

    Does this mean EMC will be moving away from NDAs anytime soon? I do not think so, as there is still very much a need for advance (and not just a couple of weeks prior to announcement) discussions around sensitive information. For example, with the trajectory or visionary discussion last week by EMC, the short presentation and discussion with limited slides prompted more questions than they addressed.

    Perhaps what we are seeing is a new approach or technique of how organizations can use and bring social networking mediums into the mainstream business process as opposed to being perceived as niche or experimental mediums.

    The reason I think it was an experiment is that EMC practices both traditional analyst/media relations along with emerging social media networking relations that include practitioners who span both audiences. For some, the social media bloggers and tweeters are a different audience than traditional media, writers, consultants or analysts; that is, they are a separate and unique audience.

    Thus, in my opinion, and like human knees, elbows, feet, hands and ears, well, you get the picture, I think there are many different views, thoughts and interpretations of social media, social networking, blogging, analysts, consultants, advisors, media or press, customers, partners and so on, with diverse roles, functions and needs.

    Where this comes back to the topic of last week's discussion is that of storage virtualization vs. virtual storage. Rest assured, in the time since the EMC briefing, and certainly in the weeks or months to come, there will be plenty of knees, elbows, hands and other body parts flying and signaling what a particular view or definition of storage virtualization vs. virtual storage is.

    Of course, some of these will be more entertaining than others, ranging from arguments well rehearsed, in some cases over the past decade or more, to new and perhaps even revolutionary ones about what is and what is not storage virtualization vs. virtual storage, let alone cloud vs. cluster vs. grid vs. federated and beyond.

     

    Additional Comments and thoughts

    In general, I like the trajectory vision EMC is rolling out, even if it causes confusion between what is virtual storage vs. storage virtualization; after all, we have been hearing about storage virtualization for over a decade now, if not longer. Likewise, there has been plenty of talk about public clouds, so it is refreshing to see more discussion, and less cloudware or cloud marketecture, about how to actually leverage what you have to adopt private cloud practices.

    I suspect that as the EMC competition starts to hear or piece together what they think this vision is or is not, we should also start to hear some interesting stories, spins, counter pitches, debates, twitter fights, blog slams and YouTube videos, all of which also happen to consume more storage.

    I also like what EMC is doing with social media and networking as a means or medium for building and maintaining relationships as well as for information exchange, complementing traditional means and mediums.

    In other words, EMC is succeeding with social networking by not using it just as another megaphone to talk at or over people, but rather as a means to engage, to get to know, to challenge and to exchange, regardless of whether you are a so called independent blogger, tweeter, analyst, media person, consultant, customer, var, investor or partner among others.

    If you are not already doing so, here are some EMC folks who actively participate in two way dialogues across different areas with @lendevanna helping to facilitate and leverage the masses of various people and subject matter experts including @chuckhollis @c_weil @cxi @davegraham @gminks @mike_fishman @stevetodd @storageanarchy @storagezilla @Stu and @vcto among many others.

    Note that for you non Twitter types, the previous are Twitter handles (names or addresses) whose profiles can be accessed by replacing the @ sign with https://twitter.com/. For example @storageio = https://twitter.com/storageio
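    That handle-to-URL substitution is mechanical enough to express in a couple of lines (a trivial sketch; the helper name is my own):

```python
def handle_to_url(handle: str) -> str:
    """Convert a Twitter @handle into its profile URL."""
    return "https://twitter.com/" + handle.lstrip("@")

print(handle_to_url("@storageio"))  # prints https://twitter.com/storageio
```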

     

    Additional Comments and thoughts:

    Here are some Twitter comments, among others, that I posted last week during the briefing event with hash tag #emcvs:

    Is what was presented on the #emcvs #it #storage #virtualization call NDA material = Negative
    Is what was presented on the #emcvs #it #storage #virtualization call a product announcement = Nope
    Is what was presented on the #emcvs #it #storage #virtualization call a statement of direction = Kind of
    Is what was presented on the #emcvs #it #storage #virtualization call a hint of future functionality = probably
    Is what was presented on the #emcvs #it #storage #virtualization call going to be shared with general public = R U reading this?
    Is what was presented on the #emcvs #it #storage #virtualization call going to be discussed further = Yup
    Is what was presented on the #emcvs #it #storage #virtualization call going to confuse the industry = Maybe
    Is what was presented on the #emcvs #it #storage #virtualization call going to confuse customers = Depends on story teller
    Is what was presented on the #emcvs #it #storage #virtualization call going to confuse competition = probably
    Is what was presented on the #emcvs #it #storage #virtualization call going to provide fodder/fuel for bloggers = Yup
    Anything else to add about #emcvs #it #storage #virtualization call today = Stay tuned, watch and listen for more!

    Some additional questions and my perspectives on those include:

    • What did EMC announce? Nothing, it was not an announcement; it was a statement of vision.
    • Why did EMC hold a briefing without an NDA and yet nothing was announced? It is my opinion that EMC has a vision that they want to float an idea or direction, thus, sharing a vision to get discussions going without actually announcing a specific product or technology.
    • Is this going to be a repackaged version of the Invista storage virtualization platform? I do not believe so.
    • Is this going to be a repackaged version of the intellectual property (IP) assets that EMC picked up from the defunct startup called YottaYotta? Given some of the references made, along with what some of the themes and discussions center on, it is my guess that there is some YottaYotta IP along with other technologies that may be part of any future solution.
    • Who or what is YottaYotta? They were a late dot-com era startup founded in 2000 that went through various incarnations and value propositions, with some solutions that shipped. Some of the late era IP included distributed cache coherency and distance enablement of large scale federated storage on a global basis.
    • Can the Yotta Yotta (or here) technology really scale? That remains to be seen. Yotta Yotta had some interesting demos, proofs of concept, early adopters and big plans; however, they also amounted to Nada Nada. Perhaps EMC can make a Lotta Lotta out of it!

     

    Other questions are still waiting for answers including among others:

    • Will EMC Virtual Storage (aka emcvs) become a common cure for typical IT infrastructure ailments?
    • Will this restart the debate around the golden rule of virtualization, that whoever controls the virtualization controls the gold, and thus vendor lock-in?
    • Will this be a members only vision where only certain partners can participate?
    • What will competitors respond with: technology, marketecture, FUD or something else?
    • What are the specific details of when, where and how the vision is implemented?
    • What will all of this cost, will it work with existing products or is a forklift upgrade needed?
    • Has EMC bitten off more than they can chew or deliver on? Are Pat Gelsinger and his crew racing down a mountain and out in front of their skis? Or is this brilliance beyond what we mere mortals can yet comprehend?
    • Can global data cache coherency really be deployed with data integrity on a global and large scale without negatively impacting performance?
    • Can EMC make Lotta Lotta with this vision?

     

    Here is what some of the EMC bloggers have had to say so far:

    Chuck Hollis aka @chuckhollis had this to say

    Stuart Miniman aka @stu had this to say

     

    Summing it up for now

    Let's see how the rest of the industry responds to this as the vision rolls out and, perhaps sooner vs. later, becomes technology that gets deployed and used.

    I'm skeptical until more details are known; however, I also like it and am intrigued by it, if it can actually jump from Yotta Yotta slideware to Lotta Lotta deployments.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Technology Tiering, Servers Storage and Snow Removal

    Granted it is winter in the northern hemisphere and thus snow storms should not be a surprise.

    However, between December 2009 and early 2010 there was plenty of record activity, from the U.K. (or here) to the U.S. east coast including New York, Boston and Washington DC, across the midwest and out to California. It made for a white Christmas and SANta fun, along with snow fun in general in the new year.

    2010 Snow Storm via www.star-telegram.com

    What does this have to do with Information Factories aka IT resources including public or private clouds, facilities, server, storage, networking along with data management let alone tiering?

    What does this have to do with tiered snow removal, or even snow fun?

    Simple: different tools are needed for addressing various types of snow, from wet and heavy to light powdery dustings to deep downfalls. Likewise, there are different types of servers, storage and data networks, along with operating systems, management tools and even hypervisors, to deal with various application needs or requirements.

    First, let's look at tiered IT resources (servers, storage, networks, facilities, data protection and hypervisors) to meet various efficiency, optimization and service level needs.

    Do you have tiered IT resources?

    Let me rephrase that question: do you have different types of servers with various performance, availability, connectivity and software that support various applications and cost levels?

    Thus the whole notion of tiered IT resources is to be able to have different resources that can be aligned to the task at hand in order to meet performance, availability, capacity, energy and economic requirements along with service level agreement (SLA) commitments.

    Computers or servers are targeted for different markets including Small Office Home Office (SOHO), Small Medium Business (SMB), Small Medium Enterprise (SME) and ultra large scale or extreme scaling, including high performance super computing. Servers are also positioned for different price bands and deployment scenarios.

    General categories of tiered servers and computers include:

    • Laptops, desktops and workstations
    • Small floor standing towers or rack mounted 1U and 2U servers
    • Medium size floor standing towers or larger rack mounted servers
    • Blade Centers and Blade Servers
    • Large size floor standing servers, including mainframes
    • Specialized fault tolerant, rugged and embedded processing or real time servers

    Servers are given different names such as email server, database server, application server, web server, video or file server, network server, security server, backup server or storage server, depending on their use. In each of the previous examples, what defines the type of server is the type of software being used to deliver a type of service. Sometimes the term appliance will be used for a server; this is indicative of the type of service the combined hardware and software solution is providing. For example, the same physical server running different software could be a general purpose application server, a database server running for example Oracle, IBM, Microsoft or Teradata among other databases, an email server or a storage server.

    This can lead to confusion in that a server may be able to support different types of workloads; thus whether it should be considered a server, storage, networking or application platform depends on the type of software being used on the server. If, for example, storage software in the form of a clustered and parallel file system is installed on a server to create a highly scalable network attached storage (NAS) or cloud based storage service solution, then the server is a storage server. If the server has a general purpose operating system such as Microsoft Windows, Linux or UNIX and a database on it, it is a database server.

    While not technically a type of server, some manufacturers use the term tin wrapped software in an attempt not to be classified as an appliance, server or hardware vendor, while still wanting their software positioned as a turnkey solution. The idea is to avoid being perceived as a software only solution that requires integration with hardware. The approach is to use off the shelf, commercially available general purpose servers with the vendor's software technology pre-integrated and installed ready for use. Thus, tin wrapped software is a turnkey software solution with some tin, or hardware, wrapped around it.

    How about the same with tiered storage?

    That is, different tiers (Figure 1) of storage: fast, high performance disk including RAM or flash based SSD, fast Fibre Channel or SAS disk drives, or high capacity SAS and SATA disk drives, along with magnetic tape as well as cloud based backup or archive.

    Tiered Storage Resources
    Figure 1: Tiered Storage resources

    Tiered storage is also sometimes thought of in terms of large enterprise class solutions or midrange, entry level, primary, secondary, near line and offline. Not to be forgotten, there are also tiered networks that support various speeds, convergence, multi-tenancy and other capabilities, from I/O Virtualization (IOV) to traditional LAN, SAN, MAN and WANs, including 1Gb Ethernet (1GbE) and 10GbE up to emerging 40GbE and 100GbE, not to mention various Fibre Channel speeds supporting various protocols.

    The notion behind tiered networks, as with servers and storage, is to enable aligning the right technology to the task at hand economically while meeting service needs.

    Two other common IT resource tiering techniques involve facilities and data protection. Tiered facilities can indicate size, availability and resiliency among other characteristics. Likewise, tiered data protection means aligning the applicable technology to support different RTO and RPO requirements, for example using synchronous replication where applicable vs. asynchronous time delayed replication for longer distances, combined with snapshots. Other forms of tiered data protection include traditional backups to disk, tape or cloud.
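    That alignment of protection technology to RTO and RPO can be sketched in code. This is a hypothetical illustration only; the thresholds and technology labels below are my assumptions for the sketch, not from any specific product:

```python
# Hypothetical sketch: align tiered data protection to RPO needs.
# Thresholds and technology choices are illustrative assumptions only.

def pick_protection(rpo_seconds: float, distance_km: float) -> str:
    """Suggest a data protection tier for a given recovery point
    objective (RPO) and site-to-site distance."""
    if rpo_seconds == 0:
        # Zero data loss generally implies synchronous replication,
        # which is practical only over shorter distances due to latency.
        if distance_km <= 100:
            return "synchronous replication"
        return "synchronous not practical; revisit RPO or distance"
    if rpo_seconds <= 15 * 60:
        # Small non-zero RPO: asynchronous (time delayed) replication
        # works over longer distances, often combined with snapshots.
        return "asynchronous replication + snapshots"
    if rpo_seconds <= 24 * 3600:
        return "periodic snapshots + backup to disk"
    return "backup to tape or cloud"

print(pick_protection(0, 50))              # synchronous replication
print(pick_protection(600, 2000))          # asynchronous replication + snapshots
print(pick_protection(7 * 24 * 3600, 0))   # backup to tape or cloud
```

    The point of the sketch is simply that the requirement (RPO, distance) drives the tier, rather than one technology fitting all needs.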

    There is a new emerging form of tiering in many IT environments: tiered virtualization, or specifically tiered server hypervisors in virtual data centers, with similar objectives to having different server, storage, network, data protection or facilities tiers. Instead of an environment running all VMware, a mix of Microsoft Hyper-V or Xen among other hypervisors may be deployed to meet different application service class requirements. For example, VMware may be used for premium features and functionality on some applications, while others that do not need those features, or that require lower operating costs, leverage Hyper-V or Xen based solutions. Taking the tiering approach a step further, one could also declare tiered databases, for example legacy Oracle vs. MySQL or Microsoft SQL Server among other examples.

    What about IT clouds, are those different types of resources, or, essentially an extension of existing IT capabilities for example cloud storage being another tier of data storage?

    There is another form of tiering, particularly during the winter months in the northern hemisphere where there is an abundance of snow this time of the year. That is, tiered snow management, removal or movement technologies.

    What about tiered snow removal?

    Well, let's get back to that then.

    Like IT resources, there are different technologies that can be used for moving, removing, melting or managing snow.

    For example, I can't do much about getting rid of snow other than pushing it all down the hill and into the river, something that would take time and lots of fuel. Or, I can manage where I put snow piles to be prepared for the next storm, placing them where they will melt while helping to avoid spring flooding. Some technologies can be used for relocating snow elsewhere, kind of like archiving data onto different tiers of storage.

    Regardless of whether it is a snowstorm or IT clouds (public or private), virtual, managed service provider (MSP), hosted or traditional IT data centers, all require physical servers, storage, I/O and data networks along with software including management tools.

    Granted, not all servers, storage or networking technology, let alone software, are the same, as they address different needs. IT resources including servers, storage, networks, operating systems and even hypervisors for virtual machines are often categorized and aligned to different tiers corresponding to needs and characteristics (Figure 2).

    Tiered IT Resources
    Figure 2: Tiered IT resources

    For example, in Figure 3 there is a lightweight plastic shovel (Shovel 1) for moving small amounts of snow in a wide stripe or pass. Then there is a narrow shovel for digging things out or breaking up snow piles (Shovel 2). Also shown is a light duty snow blower (snow thrower) capable of dealing with powdery or non-wet snow, or grooming in tight corners or small areas.

    Tiered Snow tools
    Figure 3: Tiered Snow management and migration tools

    For other light dustings, a yard leaf blower does double duty for migrating or moving snow in small or tight corners such as decks or patios, or for cleanup. Larger snowfalls, or where there is a lot of area to clear, involve heavier duty tools such as the Kawasaki Mule with a 5-foot Curtis plow. The Mule is a multifunction, multiprotocol tool capable of being used for hauling, towing, pulling or recreational tasks.

    When all else fails, there is a pickup truck to get or go out and about, not to mention to pull other vehicles out of ditches or piles of snow when they become stuck!

    Snow movement
    Figure 4: Sometimes the snow is light, making for fast, low latency migration

    Snow movement
    Figure 5: And sometimes even snow migration technology goes offline!


    And that is it for now!

    Enjoy the northern hemisphere winter and snow while it lasts, make the best of it with the right tools to simplify the tasks of movement and management, similar to IT resources.

    Keep in mind, it's about the tools and when and how to use them for various tasks, for efficiency and effectiveness, and a bit of snow fun.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Behind the Scenes, SANta Claus Global Cloud Story

    There is a ton of discussion, stories, articles, videos, conferences and blogs about the benefits and value proposition of cloud computing. Not to mention, discussion or debates about what is or what is not a cloud or cloud product, service or architecture including some perspectives and polls from me.

    Now SANta does not really care about these and other similar debates, I have learned. However, he is concerned with who has been naughty and nice, as well as watching out for impersonators or members of his crew who misbehave.

    In the spirit of the holidays, how about a quick look at how SANta leverages cloud technologies to support his global operations.

    Many in IT think that SANta bases his operations out of the North Pole as it is convenient for him to cool all of his servers, storage, networks and telecom equipment (which it is). However, it is also centrally located (see chart) for the northern hemisphere (folks down under may get serviced via SANta's secret Antarctica base of operations). Just as ANC (Anchorage International Airport) is a popular cargo transit, transload and refueling base for cargo carriers, SANta also leverages the north and south polar regions to his advantage.

    Great Circle Mapper
    SANta's Global Reach via Great Circle Mapper

    Now do not worry if you have never heard about SANta's dual redundant South Pole operations; it's one of his better kept secrets. Many organizations, including SANta's partners such as Microsoft, that have global mega IT operations and logistics centers have followed SANta's lead in leveraging various locations outside of the Pacific Northwest. Granted, like some of his partners and managed service providers, he does maintain a presence in the Washington Columbia River basin, which provides nice PR among other benefits.

    Likewise, many in business as well as those in IT think that SANta leverages cloud technologies for cost savings or avoidance, which is partially the case. However, he also leverages cloud, hosting, managed service provider (MSP), virtual data center, virtual operations center, XaaS, SaaS or SOA technologies, services, protocols and products that are transparent and complementary to his own in house resources, addressing various business and service requirement needs.

    What this has to do with the holidays and clouds is that you may not realize how extensively Santa, or St. Nick if you prefer (feel free to plug in whoever you like if Santa or St. Nick does not turn your crank), relies on flexible, scalable and resilient technologies for boosting productivity in a cost effective manner. Some of it is IT related, some of it is not. From the GPS and radar, along with recently added RNP and RNAV enhanced capabilities, on his increasingly high tech biofuel powered sleigh, to the information technology (IT) that powers his global operations, old St. Nick has got it together when it comes to technology.

    The heart or brains of the SANta operation is his global system operations center (SOC) or network operations center (NOC), which rivals those seen at NASA among others, with multiple data feeds. The SOC is a 24×365 operations function that covers all aspects from transportation, logistics, distribution, assembly or packaging, financial back office, CRM, IT and communications among other functions.

    Naturally, this is like the Apollo moon shots, whose Grumman-built LEM lunar lander had to have 100% availability: to get off of the moon, its engine only had to fire once; however, it had to work 100% of the time! This thought process is said to have leveraged principles from SANta's operations guide, where he has one night a year to accomplish the impossible.

    I should mention, while I cannot disclose (due to NDA) the exact locations of the SOCs, data or logistics centers, not to mention the vendors or technology being used, I can tell you that they are all around you! The fully redundant SOCs, data and call centers as well as logistics sites (including staff, facilities and technology) leverage different time zones for efficiency.

    SANta's staff have also found that the redundant SOCs, part of an approach across SANta's entire vast organization, have helped guard against global epidemics and pandemics, including SARS and H1N1 among others, by isolating workers while providing appropriate coverage and availability, something many large organizations have since followed.

    Carrying through on the philosophy of redundant SOCs, all other aspects of SANta's operations are distributed yet with centralized, coordinated management, leveraging real-time situational awareness, event and activity correlation (what we used to call or refer to as AI), cross technology domain management, and proactive monitoring and planning, yet with the ability for on the spot decision making.

    What this means is that the various locations have the ability to make localized decisions on the spot, while coordinating with primary operations or mission control to streamline global operations, focus on strategic activity and handle exceptions more effectively. Thus it is neither fully distributed nor fully centralized, but rather a hybrid in terms of management, technologies and the way they work.

    For example, to handle the diverse applications, there are some primary large processing and data retention facilities that back up and replicate information to other peer sites, as well as smaller regional remote office/branch office sites close to where information services are needed. To say the environment is highly virtualized would be an understatement.

    Likewise, optimization is key not just to keep costs low or to avoid overheating some of SANta's facilities located in the Arctic and Antarctic regions, which could melt the ice cap; operations are also optimized to keep response time as low as possible while boosting productivity.

    Thus, SANta has to rely on very robust and diverse communications networking leveraging LAN, SAN, MAN, WAN, POTS and PANs among other technologies. For example, his communications portfolio is said to involve landlines (copper and optical), RF including microwave, and other radio based communications supporting or using 3G, 4G, MPLS, SONET/SDH, xWDM and free space optics among others.

    SANta's networking and communications elves are also said to be working with 5G and 100GbE multiplexed on 256 lambda WDM trunk circuits in non core trunk applications. Of course, given the airborne operations, satellite and ACARS are a must to avoid overflying a destination while remaining in positive control during low visibility. Note that SANta routinely makes more CAT 3+ low visibility landings than most of the world's airlines and air freight companies combined.

    My sources also tell me that SANta has virtual desktop capability leveraging PCoIP and other optimizations on his primary and backup sleighs, enabling rapid reconfiguration for changing workload conditions. He is also fully equipped with onboard social media capabilities for updates via Twitter, Facebook and LinkedIn among others, designed by his chief social networking elf.

    Consequently, given the vast amount of information needed to support his operations, from CRM, shipping and tracking to historical and profiling needs, transactional volumes on the data, voice and social media networks dwarf stock market trading volume.

    Feeding SANta's vast organization are online, highly available, robust databases for transaction purposes, plus reference and unstructured data material including videos, websites and more. Some of these look hauntingly familiar, given those that are part of SANta's eWorld Helpers initiative, including: Sears, Amazon, Netflix, Target, Albertsons, Staples, EMC, Walmart, Overstock, RadioShack, Lands' End, Dell, HP, eBay, Lowes, Publix, eMusic, Rite Aid and Supervalu among others (I'm just sayin…).

    The actual size of SANta's information repository is a closely guarded secret, as are the exact topology, schema and content structure. However, it is understood that on peak days SANta's highly distributed, high performance, low latency data warehouse sees upwards of 1,225PBytes of data added, a number rumored to make Larry Ellison gush with excitement over its growth possibilities.

    How does SANta pull this all off? By leveraging virtualization, automation, and efficient enabling technologies that allow him and his elves (excuse me, associates or team members) to be more productive in their areas of focus, to a degree that is the envy of the universe.

    Some of their efficiency is measured in terms of:

    • How many packages can be processed per elf with minimum or no mistakes
    • Number of calls, requests, inquiries per day per elf in a friendly and understandable manner
    • Knowing who has been naughty or nice in the blink of an eye including historical profiles
    • Virtual machines (VM) or physical machine (PM) servers managed per team member
    • Databases and applications, local and remote, logical and physical per team member
    • Storage in terms of PBytes and Exabytes managed to a given service level per team member
    • Network circuits and bandwidth with fewest dropped packets (or packages) per member
    • Fewest misdirected packages as well as aborted landings per crew
    • Fewest pounds gained from consumption of most milk and cookies per crew

    From how many packages can be processed per hour, to the number of virtual servers per person, PBytes of data managed per person, network connections and circuits per person, databases and applications per person, to takeoffs and landings (SANta tops the list for this one), they are all highly efficient and effective.

    Likewise, SANta leverages the partners in his eWorld Helpers initiative network to help out where, of course, he looks for value; however, value is not just the lowest price per VM, lowest cost per TByte or cost per bandwidth. For SANta it is also very focused on performance, availability, capacity and economic efficiency, not to mention quality, with an environmentally friendly green supply chain.

    By having a green supply chain, SANta takes a responsible, global approach that also makes economic sense regarding where to manufacture, produce or procure products. Contrary to growing popular belief, locally produced may not always be the most environmentally or economically favorable approach. For example (read more here), instead of growing flowers and plants in western Europe where they are consumed, a process that would require more energy for heat and lights, not to mention water and other resources, SANta has bucked the trend, relying instead on the economics and environmental benefit of flowers and plants grown in warmer, sunnier climates.

    Granted, and rest assured, SANta still has an army of elves busily putting things together in his own factories, along with managing IT related activities in an economically positive manner.

    SANta has also applied this thinking to his data, information and communications networks, leveraging sites such as the Arctic where solar power can be used during summer months along with cooling economizers to offset the impact of batteries; workload is shifted around the world as needed. This approach is rumored to be the envy of the US EPA Energy Star for Server, Storage and Data Center crew, not to mention their followers.

    How does SANta make sure all of the data and information is protected and available? It's a combination of best practices, techniques and technologies including hardware, software, data protection management tools, disk, dedupe, compression, tape and cloud among others.

    Rest assured, if it is in the technology buzzword bingo book, it is a good bet that it has been tested in one of SANta's facilities or partner sites long before you hear about it, even under a strict NDA discussion with one of his elves (oops, I mean supplier partners).

    When asked about the importance of his information and data networks, resources, and cloud enabled, highly virtualized, efficient operations, SANta responded with a simple:

    Ho Ho Ho, Merry Christmas to all, and to all, a good night!

    As you sit back and relax, reflect, recreate, recoup or recharge, or whatever it is that you do this time of the year, take a moment to think about and thank all of SANta's helpers. They are the ones who work behind the scenes in SANta's facilities as well as those of his partners or suppliers, some in the clouds, some on or under the ground, to make the world's largest single event day (excuse me, night) possible! Or, is this SANta and cloud thing all just one big fantasy?

    Happy and safe holidays or whatever you want to refer to it as, best wishes and thanks!

    BTW: FTC disclosure information can be found here!

    Greg on Break

    Me on a break during a SANta site tour

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    I/O Virtualization (IOV) Revisited

    Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

    Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

    Additional benefits of IOV include:

    • Doing more with existing resources (people and technology) or reducing costs
    • A single (or a pair for high availability) interconnect for networking and storage I/O
    • Reduction of power, cooling, floor space, and other green efficiency benefits
    • Simplified cabling and reduced complexity for server network and storage interconnects
    • Boosting server performance by maximizing use of I/O or mezzanine slots
    • Reducing I/O and data center bottlenecks
    • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
    • Scaling I/O capacity to meet high-performance and clustered application needs
    • Leveraging common cabling infrastructure and physical networking facilities

    Before going further, let's take a step backwards for a few moments.

    To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (Figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

    TIERED ACCESS FOR SERVERS AND STORAGE
    There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus, to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over the Internet, cloud, XaaS, or managed service providers such as shown in Figure 1.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 1 The Big Picture: Data Center I/O and Networking

    The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, and time delays occur while waiting for reads and writes to complete. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory using internal local CPU-to-memory interconnects. In other words, fast servers or processors need fast I/O, in terms of both low latency I/O operations and bandwidth capabilities.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 2 Tiered I/O and Networking Access

    Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

    Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

    In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

    Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
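    The relationship between latency, IOPS, and bandwidth can be shown with simple arithmetic. The numbers below are illustrative examples, not measurements of any product:

```python
# Illustrative arithmetic tying latency, IOPS, and bandwidth together.
# Values are examples only, not benchmarks.

def max_iops_per_stream(latency_ms: float) -> float:
    """Upper bound on serialized I/Os per second with one outstanding I/O."""
    return 1000.0 / latency_ms

def bandwidth_mb_s(iops: float, io_size_kb: float) -> float:
    """Bandwidth implied by an IOPS rate at a given transfer size."""
    return iops * io_size_kb / 1024.0

# One outstanding I/O at 5 ms of latency caps out at 200 IOPS...
print(max_iops_per_stream(5.0))            # 200.0
# ...and 200 IOPS of 8 KB transfers is only about 1.56 MB/s,
print(round(bandwidth_mb_s(200, 8), 2))    # 1.56
# while the same 200 IOPS of 1 MB transfers is 200 MB/s.
print(bandwidth_mb_s(200, 1024))           # 200.0
```

    This is why both latency (for small, frequent operations) and bandwidth (for large transfers) matter when matching I/O connectivity to application profiles.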

    Peripheral Component Interconnect (PCI)
    Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

    Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack mount systems.
    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 3 Dedicated PCI adapters for I/O and networking devices

    Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

    Figure 4 PCI IOV Single Root Configuration Example

    In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with the PCIe version and signaling rate determine performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, and PCIe is capable of over 4 GB per second (e.g., 32 Gbit) in each direction using 16 lanes for high-end servers.
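    Those PCIe numbers follow from per-lane signaling arithmetic. A quick sketch, assuming first-generation PCIe at 2.5 GT/s per lane with 8b/10b encoding (8 data bits carried per 10 bits on the wire):

```python
# PCIe generation-1 bandwidth arithmetic: 2.5 GT/s per lane,
# with 8b/10b encoding overhead (80% efficiency).

def pcie_gen1_gbit_s(lanes: int) -> float:
    """Usable Gbit/s in each direction for a gen-1 PCIe link."""
    raw_gt_s = 2.5          # gigatransfers per second per lane
    encoding = 8.0 / 10.0   # 8b/10b encoding efficiency
    return lanes * raw_gt_s * encoding

def pcie_gen1_gbyte_s(lanes: int) -> float:
    """Usable GB/s in each direction (8 bits per byte)."""
    return pcie_gen1_gbit_s(lanes) / 8.0

# A x16 link: 16 lanes * 2.5 GT/s * 0.8 = 32 Gbit/s,
# i.e., 4 GB/s per direction, matching the figures quoted above.
print(pcie_gen1_gbit_s(16))    # 32.0
print(pcie_gen1_gbyte_s(16))   # 4.0
```

    Later PCIe generations raise the per-lane signaling rate and change the encoding, so the constants here apply only to the first-generation links discussed in this article.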

    The importance of PCIe and its predecessors lies in the shift away from multiple vendors' different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or earlier generations of PCI in some form, ranging from native internal use on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

    The most current version of PCI, as defined by the PCI Special Interest Group (PCI-SIG), is PCI Express (PCIe). Backward compatibility exists by bridging previous generations, including PCI-X and PCI, off a native PCIe bus or, in the past, by bridging a PCIe bus to a native PCI-X implementation. Beyond speed and bus-width differences across the various generations and implementations, PCI adapters are also available in several form factors and applications.

    Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI-SIG I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance, but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

    I/O VIRTUALIZATION (IOV)
    On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft Hyper-V or VMware ESX/vSphere, the hypervisor is able to see and share a single physical adapter, or multiple adapters for redundancy and performance, among guest operating systems. The guest systems see what appears to be a standard SAS, FC, or Ethernet adapter or NIC using standard plug-and-play drivers.

    Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine. With a virtual HBA or NIC, physical adapter resources are carved up and allocated much like virtual machines, but instead of hosting a guest operating system such as Windows, UNIX, or Linux, a SAS or FC HBA, FCoE converged network adapter (CNA), or Ethernet NIC is presented.

    In addition to the virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LANs (VLANs), virtual SANs (VSANs), and virtual private networks (VPNs) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or subnetworks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables the various network segments or virtual networks to communicate with each other when appropriate security criteria are met.
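To illustrate the "physically connected yet logically isolated" idea, here is a toy sketch of my own (purely illustrative; real VLAN enforcement happens in switch hardware via 802.1Q tags): traffic passes only between ports in the same VLAN unless a routing rule explicitly joins two segments.

```python
def can_communicate(port_vlan, src, dst, routes=()):
    """True if src and dst share a VLAN, or an explicit route joins their VLANs."""
    a, b = port_vlan[src], port_vlan[dst]
    return a == b or (a, b) in routes or (b, a) in routes

# Three ports on one physical switch, split across two VLANs.
vlans = {"web1": 10, "web2": 10, "db1": 20}
print(can_communicate(vlans, "web1", "web2"))                     # same VLAN
print(can_communicate(vlans, "web1", "db1"))                      # isolated
print(can_communicate(vlans, "web1", "db1", routes={(10, 20)}))   # routed/gatewayed
```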

    PCI-SIG IOV
    PCI-SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single-root IOV (SR-IOV), and multi-root IOV (MR-IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. SR-IOV enables multiple guest operating systems to access a single I/O device simultaneously without having to rely on a hypervisor for a virtual HBA or NIC.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

    Figure 5 PCI SIG IOV

    The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without incurring the potential I/O overhead of a virtualization software infrastructure. MR-IOV is the next step, enabling a PCIe or SR-IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity, and improved resource utilization.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 6 PCI SIG MR IOV

    Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCI-X, or PCI devices. Instead of the adapter cards physically plugging into each server, a high-performance, short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

    In Figure 6, either SR-IOV or MR-IOV can take place, depending on the specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. In an SR-IOV example, each server has access to some number of dedicated adapters in the external card cage, for example InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged network adapters (CNAs), also known as HBAs. SR-IOV implementations do not allow different physical servers to share adapter cards. MR-IOV builds on SR-IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely and transparently.

    The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical-footprint or form-factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO, and Virtensys, among others.
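As a present-day, Linux-specific aside of my own (not tied to any vendor above): modern Linux kernels expose SR-IOV capability through sysfs, where each capable PCIe physical function carries an `sriov_totalvfs` file, and virtual functions are instantiated by writing a count to `sriov_numvfs` (which requires root). A minimal sketch to inventory such devices:

```python
import os

def sriov_capable_devices(sysfs_root="/sys/bus/pci/devices"):
    """Return {pci_address: max_virtual_functions} for SR-IOV capable devices."""
    result = {}
    if not os.path.isdir(sysfs_root):
        return result
    for dev in sorted(os.listdir(sysfs_root)):
        # Only SR-IOV capable physical functions expose sriov_totalvfs.
        totalvfs = os.path.join(sysfs_root, dev, "sriov_totalvfs")
        if os.path.isfile(totalvfs):
            with open(totalvfs) as fh:
                result[dev] = int(fh.read().strip())
    return result

if __name__ == "__main__":
    for addr, maxvfs in sriov_capable_devices().items():
        print(f"{addr}: up to {maxvfs} virtual functions")
```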

    InfiniBand IOV
    InfiniBand-based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers, which then see a Fibre Channel HBA and an Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

    The value of InfiniBand converged networks is that they exist today and can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

    From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

    General takeaway points include the following:

    • Minimize the impact of I/O delays to applications, servers, storage, and networks
    • Do more with what you have, including improving utilization and performance
    • Consider latency, effective bandwidth, and availability in addition to cost
    • Apply the appropriate type and tiered I/O and networking to the task at hand
    • I/O operations and connectivity are being virtualized to simplify management
    • Convergence of networking transports and protocols continues to evolve
    • PCIe IOV is complementary to converged networking, including FCoE

    Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However, until that time, or at least for the foreseeable future, several things can be done to minimize the impact of I/O for local and remote networking as well as to simplify connectivity.

    Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Could Huawei buy Brocade?

    Disclosure: I have no connection to Huawei. I own no stock in Brocade, nor have I worked for them as an employee; however, I did work for three years at SAN vendor INRANGE, which was acquired by CNT. I left to become an industry analyst prior to CNT's acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however, I have spoken on general industry trends and perspectives at various Brocade customer events in the past.

    Is Brocade for sale?

    Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

    BRCD has a diverse product portfolio for Fibre Channel and Ethernet, along with the emerging Fibre Channel over Ethernet (FCoE) market, and a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

    Generally speaking, let's keep things in perspective: everything is always for sale, and in an economy like now, bargains are everywhere. Many businesses are shopping; it's just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

    Consequently, the coconut wires are abuzz with talk and speculation about who will buy Brocade, or perhaps who Brocade might buy, among other merger and acquisition (M&A) chatter. For example, who might buy BRCD? Why not EMC (they sold McData off years ago via IPO), IBM (they sold some of their networking business to Cisco years ago), or HP (currently an OEM partner of BRCD)?

    Last week on Twitter, I responded to a comment about who would want to buy Brocade with something to the effect of "why not a Huawei," to which there was mostly silence, except for industry luminary Steve Duplessie (have a look to see what Steve had to say).

    Part of being an analyst, IMHO, should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes, from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and thinking and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or might not be interested in them.

    IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800-lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the apple cart with its server partners IBM, Dell, HP, Oracle/Sun, and others by testing the waters of the server world with its UCS. So far I see this as something akin to probing the defenses of a target before launching a full-out attack.

    In other words, checking to see how the opposition responds, what defenses are put up, and collecting G2 or intelligence, as well as gauging how the rest of the world or industry might respond to an all-out assault or shift of power or control. Of course, HP, IBM, Dell, and Sun/Oracle will not let this move into their revenue and account control go unnoticed, with initial counter-announcements having been made; some re-emphasize their relationship with Brocade, along with Brocade's recent acquisition of Ethernet/IP vendor Foundry.

    Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

    Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor, or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought or is trying to buy Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS, and so the M&A game continues, among other deals.

    Some of the deals are strategic, many are tactical. Brocade being bought I would put in the category of a strategic scenario, a bargaining chip, or even a pawn if you prefer, in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS, and PANs (check out my book "Resilient Storage Networks" (Elsevier))!

    So with conversations focused on Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets, and decision making, why wouldn't an HP, IBM, or Dell, let alone a NetApp, Oracle/Sun, or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper, or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco), or Fujitsu (their telco group, that is), among others?

    Then why not Huawei, a company some may have heard of and others may not have?

    Who is Huawei you might ask?

    Simple: they are a very large IT solutions provider and a large player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenues are about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B, and 3Com about $1.3B, with Huawei at about $23B and a year-over-year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and FalconStor, among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.

    Sure, there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData, among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM, and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check on Cisco's expansion efforts? If nothing else, it would maintain the status quo in the industry for now; or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

    Does this mean that Brocade could be bought? Sure.
    Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
    Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
    Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

    Now, if Huawei were to buy Brocade, that raises a fun question: could the result be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

    Nuff said for now, food for thought.

    Cheers – gs

    Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

    I/O, I/O, It's off to Virtual Work and VMworld I Go (or went)

    Ok, so I should have used that intro last week before heading off to VMworld in San Francisco instead of after the fact.

    Think of it as a high-latency title or intro, kind of like attaching a fast SSD to a slow, high-latency storage controller, a fast server attached to a slow network, or a fast network with slow storage and servers; it is what it is.

    I/O virtualization (IOV) and virtual I/O (VIO), along with I/O and networking convergence, have been getting more and more attention lately, particularly on the convergence front. In fact, one might conclude that it is suddenly trendy to be on the IOV, VIO, and convergence bandwagon, given how cloud, SOA, and SaaS hype are being challenged, perhaps even turning into storm clouds.

    Let's get back on track, or in the case of the past week, get back in the car, get back on the plane, get back into the virtual office, and look at what it all has to do with virtual I/O and VMworld.

    The convergence game has at its center Brocade, emanating from the data center and storage-centric I/O corner, challenging Cisco, hailing from the MAN, WAN, and LAN general networking corner.

    Granted, both vendors have dabbled with success in each other's corners or areas of focus in the past. For example, Brocade has, via acquisitions (McData, Nishan, CNT, and INRANGE among others), a diverse and capable stable of local and long-distance SAN connectivity and channel extension for mainframe and open systems, supporting data replication, remote tape, and wide-area clustering. Not to mention deep bench experience with the technologies, protocols, and partner solutions for LAN, MAN (xWDM), WAN (iFCP, FCIP, etc.), and even FAN (file area networking, aka NAS), along with iSCSI in addition to Fibre Channel and FICON solutions.

    Disclosure: Here’s another plug ;) Learn more about SANs, LANs, MANs, WANs, POTS, and PANs and related technologies and techniques in my book "Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures" (Elsevier).

    Cisco, not to be outdone, has a background in the LAN, MAN, and WAN space directly or, similar to Brocade, via partnerships, with product experience and depth. In fact, while many of my former INRANGE and CNT associates ended up at Brocade via McData or indirectly, some ended up at Cisco. While Cisco is known for general networking, over the past several years it has gone from zero to being successful in the Fibre Channel and, yes, even the FICON mainframe space, while, like Brocade (with HBAs), dabbling in other areas such as servers and storage, not to mention consumer products.

    What does this have to do with IOV and VIO, let alone VMworld and my virtual office? Hang on, hold that thought for a moment; let's get the convergence aspect out of the way first.

    On the I/O and networking convergence (e.g., Fibre Channel over Ethernet, FCoE) scene, both Brocade (Converged Enhanced Ethernet, CEE) and Cisco (Data Center Ethernet, DCE), along with their partners, are rallying around each other's camps. This is similar to how a pair of prizefighters maneuver in advance of a match, including plenty of trash talk, hype, and all that goes with it. Brocade and Cisco throwing mud balls (or spam) at each other, or having someone else do it, is nothing new; however, in the past each has had its core areas of focus, coming from different tenets and in some cases selling to different people in an IT environment or in VAR and partner organizations. Brocade and Cisco are not alone, nor is the I/O networking convergence game the only one in play, as it is being complemented by the IOV and VIO technologies addressing different value propositions in IT data centers.

    Now on to the IOV and VIO aspect along with VMworld.

    For those of you who attended VMworld and managed to get outside the session rooms, media/analyst briefing or reeducation rooms, or partner and advisory board meetings to walk the expo hall show floor, there was the usual sea of vendors and technology. There were servers (physical and virtual), storage (physical and virtual), terminals, displays and other hardware, I/O and networking, data protection, security, cloud and managed services, development and visualization tools, infrastructure resource management (IRM) software tools, manufacturers and VARs, consulting firms, and even some analysts with booths selling their wares, among others.

    Likewise, in the onsite physical data center to support the virtual environment, there were servers, storage, networking, cabling and associated hardware along with applicable software and tucked away in all of that, there were also some converged I/O and networking, and, IOV technologies.

    Yes, IOV, VIO and I/O networking convergence were at VMworld in force, just ask Jon Torr of Xsigo who was beaming like a proud papa wanting to tell anyone who would listen that his wares were part of the VMworld data center (Disclosure: Thanks for the T-Shirt).

    Virtensys had their wares on display, with Bob Nappa more than happy to show the technology beyond a GUI demo, including how their solution includes disk drives and an LSI MegaRAID adapter to support VM boot while leveraging off-the-shelf or existing PCIe adapters (SAS, FC, FCoE, Ethernet, SATA, etc.) and allowing adapter sharing across servers. Not to mention, they won the best new technology award at VMworld.

    NextIO, which is involved in the IOV/VIO game, was there along with convergence vendors Brocade, Cisco, QLogic, and Emulex, among others. Rest assured, there are many other vendors and VARs in the VIO and IOV game, either still in stealth, semi-stealth, or having recently launched.

    IOV and VIO, as delivered by solutions like those from Aprius, Virtensys, Xsigo, and NextIO, are complementary to I/O and networking convergence. While they sound similar, there is in fact confusion as to whether Fibre Channel N_Port ID Virtualization (NPIV) and VMware virtual adapters are IOV and VIO, vs. solutions that are focused on PCIe device/resource extension and sharing.

    Another point of confusion around I/O virtualization and virtual I/O is blade system or blade center connectivity solutions such as HP Virtual Connect or IBM Fabric Manager, not to mention those from Egenera. Some of the buzzwords that you will be hearing and reading more about include PCIe single-root IOV (SR-IOV) and multi-root IOV (MR-IOV). Think of it this way: within VMware you have virtual adapters, and Fibre Channel N_Port ID Virtualization for LUN mapping/masking, zone management, and other tasks.

    IOV enables localized sharing of physical adapters across different physical servers (blades or chassis), with distances measured in a few meters; after all, it's the PCIe bus that is being extended. Thus, it is not a replacement for longer-distance data center solutions such as FCoE, or even SAS for that matter; rather, they are complementary, or at least should be considered complementary.

    The following are some links to previous articles and related material, including an excerpt (yes, another plug ;)) from Chapter 9, "Networking with your servers and storage," of the new book "The Green and Virtual Data Center" (CRC). Speaking of virtual and physical, "The Green and Virtual Data Center" (CRC) was on sale at the physical VMworld bookstore this week, as well as at virtual bookstores including Amazon.com.

    The Green and Virtual Data Center (CRC) on book shelves at the VMworld Book Store

    Links to some IOV, VIO and I/O networking convergence pieces among others, as well as news coverage, comments and interviews can be found here and here with StorageIOblog posts that may be of interest found here and here.

    SearchSystemChannel: Comparing I/O virtualization and virtual I/O benefits – August 2009

    Enterprise Storage Forum: I/O, I/O, It’s Off to Virtual Work We Go – December 2007

    Byte and Switch: I/O, I/O, It’s Off to Virtual Work We Go (Book Chapter Excerpt) – April 2009

    Thus I went to VMworld in San Francisco this past week, as much of the work I do involves convergence, similar to my background: servers, storage, I/O networking, hardware, software, virtualization, data protection, performance, and capacity planning.

    As for the virtual work, well, I spent some time on airplanes this week, which, as is often the case, serve as my virtual office. Granted, it was real work that had to be done; however, I also had a chance to meet up with some fellow tweeters at a tweet-up Tuesday evening before getting back on a plane in my virtual office.

    Now, I/O, I/O, it's back to real work I go at Server and StorageIO. Kind of rhymes, doesn't it?


    Is There a Data and I/O Activity Recession?


    With all the focus on both domestic and international economic woes and discussion of recessions and depressions and possible future rapid inflation, recent conversations with IT professionals from organizations of all size across different industry sectors and geographies prompted the question, is there also a data and I/O activity recession?

    Here’s the premise: if you listen to current economic and financial reports as well as employment information, the immediate conclusion is that yes, there should also be an I/O recession in the form of a contraction in the amount of data being processed, moved, and stored, which would also impact I/O and networking activity (e.g., DAS, LAN, SAN, FAN or NAS, MAN, WAN). After all, the server, storage, I/O, and networking vendors' earnings are all being impacted, right?

    As is often the case, there is more to the story. Certainly vendor earnings are down, and some vendors are shipping less product than during corresponding periods a year or more ago. Likewise, I continue to hear from IT organizations, VARs, and vendors about lengthened sales cycles due to increased due diligence and more scrutiny of IT acquisitions, meaning that sales and revenue forecasts continue to be very volatile, with some vendors pulling back on their future financial guidance.

    However, does that mean fewer servers, storage, I/O, and networking components, not to mention less software, are being shipped? In some cases there is or has been a slowdown. In other cases, demand is lower due to pricing pressures; increased performance and capacity density, where more work can be done by fewer devices; consolidation; data footprint reduction; optimization; and virtualization, including VMware and other techniques, not to mention a decrease in some activity. On the other hand, while some retail vendors are seeing their business volume decrease, others such as Amazon are seeing continued heavy demand and activity.

    Been on a trip lately through an airport? Granted, the airlines have instituted capacity management (e.g., capacity planning) and fleet optimization to align the number of flights or frequency, as well as aircraft type (tiering), to demand. In some cases smaller planes, in other cases larger planes; for some, more stops at a lower price (trading time for money), or in other cases, shorter direct routes for a higher fee. The point is that while there is an economic recession underway, and granted there are fewer flights, many if not most of those flights are full, which means transactions and information to process by the airlines' reservation and operational systems, as well as their customer relations and loyalty systems.

    Mergers and acquisitions usually mean a reduction or consolidation of activity, resulting in excess and surplus technologies. Yet, talking with some financial services organizations, while over time some of their systems will be consolidated to achieve operating efficiencies and synergies, near term there is, in some cases, a need for more IT resources to support the increased activity of running multiple applications and handling increased customer inquiry and conversion activity.

    On a go-forward basis, there is the need to support more applications and services that will generate more I/O activity as data is moved, processed, and stored. Not to mention data being retained in multiple locations for longer periods of time to meet both regulatory and non-regulatory compliance requirements, as well as for BC/DR and business intelligence (BI) or data mining for marketing and other purposes.

    Speaking of the financial sector, while the economic value of most securities is depressed, and with the wild valuation swings in the stock markets, the result is more data to process, move and store on a daily basis, all of which continues to place more demand on IT infrastructure resources including servers, storage, I/O networking, software, facilities and the people to support them.

    Dow Jones Trading Activity Volume (Courtesy of data360.org)

    For example, the amount of Dow Jones trading activity is on a logarithmic upward trend in the example chart from data360.org, which means more buy and sell transactions. The result of more transactions is an increase in the number of back-office functions for settlement, tracking, surveillance, customer inquiry, and reporting, among other activities. This means more I/Os are generated, with data to be moved, processed, replicated, and backed up, along with additional downstream activity and processing.

    Shifting gears, the same goes for telephone and in particular cell phone traffic, which indirectly drives IT systems activity, particularly for email and other messaging support. Speaking of email, more and more emails are sent every day; granted, many are spam, yet these all result in more activity as well as more data.

    What’s the point in all of this?

    There is a common awareness among most IT professionals that more data is generated and stored every year, and there is also an awareness of the increased threats to, and reliance upon, data and information. However, what's not as widely discussed is the increase in I/O and networking activity. That is, space capacity often gets talked about, while I/O performance, response time, activity, and data movement can be forgotten, or their importance to productivity diminished. So the point is: keep performance, response time, and latency in focus, as well as IOPS and bandwidth, when looking at and planning IT infrastructure to avoid data center bottlenecks.
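As a back-of-the-envelope illustration of that point (my numbers, purely hypothetical): a workload's IOPS and I/O size convert directly into required bandwidth, and a simple M/M/1 queueing approximation shows how response time balloons as a device approaches saturation, which is why activity matters as much as space capacity.

```python
def required_bandwidth_mb(iops, io_size_kb):
    """Bandwidth in MB/s needed to sustain a given IOPS rate and I/O size."""
    return iops * io_size_kb / 1024

def response_time_ms(service_time_ms, utilization):
    """M/M/1 approximation: response time grows sharply as utilization nears 1."""
    return service_time_ms / (1 - utilization)

# 20,000 IOPS of 8 KB transactions needs only 156.25 MB/s of bandwidth...
print(required_bandwidth_mb(20_000, 8))
# ...yet at 90% busy, a 5 ms service time balloons to roughly 50 ms response time.
print(response_time_ms(5.0, 0.9))
```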

    Finally for now, what’s your take, is there a data and/or I/O networking recession, or is it business and activity as usual?

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio
