Stability without compromise or increased complexity
Dimension and direction – Scaling-up (vertical), scaling-out (horizontal), scaling-down
Scaling PACE – Performance Availability Capacity Economics
Often I hear people talk about scaling only in the context of space capacity. However, scaling spans other aspects including performance and availability, as well as directions such as scaling-up or scaling-out. From an application workload perspective, scaling covers four main themes: performance, availability, capacity and economics (as well as energy).
Performance – Transactions, IOPs, bandwidth, response time, errors, quality of service
Availability – Accessibility, durability, reliability, HA, BC, DR, Backup/Restore, BR, data protection, security
Capacity – Space to store information or place for workload to run on a server, connectivity ports for networks
Economics – Capital and operating expenses, buy, rent, lease, subscription
Scaling with Stability
Economics (and energy) should be thought of more as a by-product, result or goal of scaling. Scaling should not compromise some other attribute, such as gaining performance at the cost of capacity or increased complexity. Scaling with stability means that as you scale in some direction, or across some attribute (e.g. PACE), there is no corresponding increase in management complexity, nor loss of performance and availability. To use a popular buzz-term, scaling with stability means performance, availability, capacity and economics should scale linearly with their capabilities, or perhaps cost less per unit.
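The linear-scaling idea can be illustrated with a small sketch (the node counts and throughput numbers below are hypothetical, not from any benchmark): compare measured throughput against what ideal linear scaling would deliver.

```python
# Hypothetical sketch: checking whether measured throughput scales
# linearly ("scaling with stability") as resources are added.

def scaling_efficiency(baseline_tps, nodes, measured_tps):
    """Return measured throughput as a fraction of ideal linear scaling."""
    ideal = baseline_tps * nodes        # what perfect linear scale-out would give
    return measured_tps / ideal

# Example: 1 node does 10,000 transactions/sec, so ideal linear scaling
# would give 4 nodes -> 40,000 TPS. A measured 34,000 TPS is 85% efficient.
eff = scaling_efficiency(baseline_tps=10_000, nodes=4, measured_tps=34_000)
print(f"scaling efficiency: {eff:.0%}")  # prints "scaling efficiency: 85%"
```

A ratio well below 1.0 as nodes are added is a sign that complexity, contention or overhead is growing along with the scaling, i.e. scaling without stability.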
Some examples of scaling in different directions include:
Scaling-up (vertical scaling with bigger or faster)
Scaling-down (vertical scaling with less)
Scaling-out (horizontal scaling with more of what is being scaled)
Scaling-up and out (combines vertical and horizontal)
Of course you can combine the above in various combinations, such as scaling up and out, and apply different names and nomenclature to suit your needs or preferences. The following is a closer look at the above with some simple examples.
Example of scaling up (vertically)
Example of scaling-down (e.g. for smaller scenarios)
Example of scaling-out (horizontally)
Example of scaling-out and up (horizontally and vertically)
Summary and what this means
There are many aspects to scaling, as well as side-effects or impacts as a result of scaling.
Scaling can refer to different workload attributes as well as how to support those applications.
Regardless of what you view scaling as meaning, keep in mind the context of where and when it is used, and that others might have a different view of scale.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Lenovo ThinkServer TD340 Server and StorageIO lab Review
Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here); in fact I ended up buying a TS140 after the review, and a few months back picked up another one. This StorageIOlab review looks at the Lenovo ThinkServer TD340 Tower Server, which besides having a larger model number than the TS140 also has a lot more capabilities (server compute, memory, I/O slots and internal hot-swap storage bays). Pricing varies depending on specific configuration options; however, at the time of this post Lenovo was advertising a starting price of $1,509 USD for a specific configuration here. You will need to select different options to determine your specific cost.
The TD340 is one of the servers Lenovo had prior to its acquisition of the IBM x86 server business, which you can read about here. Note that the Lenovo acquisition of the IBM xSeries business group began in early October 2014 and is expected to be completed across different countries in early 2015. Read more about the IBM xSeries business unit here, here and here.
The Lenovo TD340 Experience
Let's start with the overall experience, which was very easy other than deciding what make and model to try. This included first answering some questions to get the process moving, and agreeing to keep the equipment safe, secure and insured as well as not damaging anything. Part of the process also involved answering some configuration-related questions, and shortly thereafter a large box from Lenovo arrived.
TD340 with Keyboard and Mouse (Monitor and keyboard not included)
One of the reasons I have a photo of the TD340 on a desk is that I initially put it in an office environment, similar to what I did with the TS140, as Lenovo claimed it would be quiet enough to do so. I was not surprised; indeed the TD340 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet, the TD340 is a good fit for environments that need a server in an office setting as opposed to a server or networking room.
TD340 "Selfie" with 4 x 8GB DDR3 DIMM (32GB) and PCIe slots (empty)
TD340 internal drive hot-swap bays
Speeds and Feeds
The TD340 that I tested was a Machine type 7087 model 002RUX which included 4 x 16GB DIMMs and both processor sockets occupied.
You can view the Lenovo TD340 data sheet with more speeds and feeds here, however the following is a summary.
Operating systems support include various Windows Servers (2008-2012 R2), SUSE, RHEL, Citrix XenServer and VMware ESXi
Form factor is 5U tower with weight starting at 62 pounds depending on how configured
Processors include support for up to two (2) Intel E5-2400 v2 series
Memory includes 12 DDR3 DRAM DIMM slots (LV RDIMM and UDIMM) for up to 192GB.
Expansion slots vary depending on whether one or two CPU sockets are populated. With a single CPU socket installed there are: 1 x PCIe Gen3 FH/HL slot (x8 mechanical, x4 electrical), 1 x PCIe Gen3 FH/HL slot (x16 mechanical, x16 electrical), and a single PCI 32-bit/33 MHz FH/HL slot. With two CPU sockets installed, extra PCIe slots are enabled: 1 x PCIe Gen3 FH/HL (x8 mechanical, x4 electrical), 1 x PCIe Gen3 FH/HL (x16 mechanical, x16 electrical), 3 x PCIe Gen3 FH/HL (x8 mechanical, x8 electrical), and a single PCI 5V 32-bit/33 MHz FH/HL slot.
Two 5.25” media bays for CD or DVDs or other devices
Integrated ThinkServer RAID (0/1/10/5) with optional RAID adapter models
Internal storage varies depending on model, including up to eight (8) x 3.5” hot-swap drives or 16 x 2.5” hot-swap drives (HDDs or SSDs).
Storage space capacity varies by the type and size of the drives being used.
Networking interfaces include two (2) x GbE
Power supply options include a single 625 watt or 800 watt unit, or 1+1 redundant hot-swap 800 watt units; cooling is via five fixed fans.
Management tools include ThinkServer Management Module and diagnostics
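As a rough illustration of how the integrated RAID levels (0/1/10/5) trade capacity for protection, here is a sketch; the eight-drive, 4TB-per-drive configuration is an assumption for illustration, not a TD340 default.

```python
# Illustrative sketch: usable space for common RAID levels given
# identical drives. Drive count and size here are made-up examples.

def usable_tb(level, drives, size_tb):
    """Approximate usable capacity in TB for a RAID set."""
    if level == "0":
        return drives * size_tb          # striping, no redundancy
    if level == "1":
        return size_tb                   # classic two-drive mirror
    if level == "10":
        return drives * size_tb / 2      # striped mirrors halve usable space
    if level == "5":
        return (drives - 1) * size_tb    # one drive's worth of parity
    raise ValueError(f"unsupported level: {level}")

# Eight hypothetical 4TB drives in the hot-swap bays:
for level in ("0", "10", "5"):
    print(f"RAID {level}: {usable_tb(level, drives=8, size_tb=4)} TB usable")
```

Actual usable space also depends on hot spares, controller overhead and drive formatting, so treat these as ballpark figures when sizing a configuration.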
What Did I Do with the TD340
After initial check out in an office type environment, I moved the TD340 into the lab area where it joined other servers to be used for various things.
Some of those activities included using the Windows Server 2012 Essentials along with associated admin activities as well as installing VMware ESXi 5.5.
What I liked
Unbelievably quiet, which may not seem like a big deal; however, if you are looking to deploy a server into a small office workspace, this becomes an important consideration. On the other hand, if you are a power user and want a robust server that can be installed into a home media entertainment system, this might be a nice-to-have as well ;). Speaking of I/O slots, naturally I'm interested in server storage I/O, so having multiple slots is a must-have, along with a multi-core processor (pretty much standard these days) supporting Intel VT and EPT for running VMware (these were disabled in the BIOS, however that was an easy fix).
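On the VT/EPT point: on Linux, one way to see whether virtualization features are exposed (rather than disabled in the BIOS) is to look for the vmx and ept CPU flags. The helper below is an illustrative sketch that parses /proc/cpuinfo-style text; the sample string is made up.

```python
# Illustrative sketch: check a /proc/cpuinfo dump for Intel virtualization
# features. VT-x shows up as the "vmx" flag, extended page tables as "ept".
# If missing on a CPU that supports them, they are often disabled in the BIOS.

def virt_flags(cpuinfo_text):
    """Return which virtualization-related CPU flags are present."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {"vmx": "vmx" in flags, "ept": "ept" in flags}

sample = "flags : fpu vme vmx ept sse2"   # made-up sample line
print(virt_flags(sample))                  # {'vmx': True, 'ept': True}

# On a real system you would pass the actual file contents:
#   virt_flags(open("/proc/cpuinfo").read())
```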
What I did not like
The only thing I did not like was that I ran into a compatibility issue trying to use an LSI 9300 series 12Gb SAS HBA, which Lenovo is aware of and perhaps has even addressed by now. The adapters worked; however, I was not able to get the full performance out of them as compared to other systems, including my slower Lenovo TS140s.
Summary
Overall I give Lenovo and the TD340 a "B+", which would have been an "A" had I not gotten myself into a BIOS situation or been able to run the 12Gbps SAS PCIe Gen 3 cards at full speed. Likewise, the Lenovo service and support helped improve the experience. On the other hand, if you are simply going to use the TD340 in a normal out-of-the-box mode without customizing it to add your own adapters or install your own operating system or hypervisors (beyond those supplied as part of the install setup tool kit), you may have an "A" or "A+" experience with the TD340.
Would I recommend the TD340 to others? Yes for those who need this type and class of server for Windows, *nix, Hyper-V or VMware environments.
Would I buy a TD340 for myself? Maybe if that is the size and type of system I need, however I have my eye on something bigger. On the other hand for those who need a good value server for a SMB or ROBO environment with room to grow, the TD340 should be on your shopping list to compare with other solutions.
Disclosure: Thanks to the folks at Lenovo for sending and making the TD340 available for review and a hands-on test experience, including covering the cost of shipping both ways (the unit should now be back in your possession). This is not a sponsored post: Lenovo is not paying for it (they did loan the server and covered two-way shipping), nor am I paying them. However, I have bought some of their servers in the past for the StorageIOLab environment, as companions to some Dell and HP servers I have also purchased.
Hello and welcome to this joint September and October Server and StorageIO update newsletter. Since the August newsletter, things have been busy with a mix of behind-the-scenes projects as well as other activities, including several webinars and online along with in-person events in the US and Europe.
Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.
In September I was invited to give the keynote opening presentation at the MSP-area CMG event. The theme for the September CMG event was "Flash – A Real Life Experience", with a focus on what people are doing and how they are testing and evaluating, including use of hybrid solutions, as opposed to vendor marketing sessions. My session was titled "Flash back to reality – Myths and Realities, Flash and SSD Industry trends perspectives plus benchmarking tips" and can be found here. Thanks to Tom Becchetti and the MSP CMG (@mspcmg) folks for a great event.
There are many facets to hybrid storage, including different types of media (SSDs and HDDs) along with unified or multi-protocol access. Then there is hybrid storage that spans local and public clouds. Here is a link to an online Internet Radio show via InformationWeek, along with an online chat about Hybrid Storage for Government.
Some things I’m working with or keeping an eye on include Cloud, Converged solutions, Data Protection, Business Resiliency, DCIM, Docker, InfiniBand, Microsoft (Hyper-V, SOFS, SMB 3.0), Object Storage, SSD, SDS, VMware and VVOL among others items.
A lot has been going on in the IT industry since the last StorageIO Update newsletter. The following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular, including concerns about privacy, security and availability. Here are some comments at SearchCloudComputing about moving on from cloud deployment heartbreak.
NAND flash Solid State Devices (SSDs) continue to increase in customer deployments; over at Processor, here are some comments on Incorporating SSDs Into Your Storage Plan. Also on SSD, here are some perspectives making the Argument For Flash-Based Storage. Some other comments over at Processor.com include looking At Disaster Recovery As A Service, tips on what to Avoid In Data Center Planning, making the most of Enterprise Virtualization, as well as New Tech, Advancements To Justify Servers. Part of controlling and managing storage costs is having timely insight and metrics that matter; here are some more perspectives and also here.
Some other comments and perspectives are over at EnterpriseStorageForum including Top 10 Ways to Improve Data Center Energy Efficiency. At InfoStor there are comments and tips about Object Storage, while at SearchDataBackup I have some perspectives about Symantec being broken up.
Recent Server and StorageIO tips and articles appearing in various venues include, over at SearchCloudStorage, a series of frequently-asked-question discussion pieces:
StorageIO podcasts are also available at StorageIO.tv
From StorageIO Labs
Research, Reviews and Reports
Enterprise 12Gbps SAS and SSDs: Better Together – Part of an Enterprise Tiered Storage Strategy. In this StorageIO Industry Trends Perspective thought leadership white paper we look at how enterprise-class SSDs and 12Gbps SAS address current and next-generation tiered storage for virtual, cloud, and traditional little and big data environments. This report includes proof points running various workloads, including TPC-B and TPC-E database and Microsoft Exchange in the StorageIO Labs, along with cache software comparing SSDs, SSHDs and HDDs. Read the white paper, compliments of Seagate 1200 12Gbps SAS SSDs.
VMware Cisco EMC VCE Zen and now server storage I/O convergence
In case you have not heard, the joint venture (JV) founded in the fall of 2009 between Intel, VMware, Cisco and EMC, called VCE, had a change of ownership today.
Well, kind of…
Who is VCE and what’s this Zen stuff?
For those not familiar or who need a recap, VCE was formed to create converged server, storage I/O and networking hardware and software solutions, combining technologies from its investors into solutions called vBlocks.
The major investors were Cisco, which provides the converged servers and I/O networking along with associated management tools, and EMC, which provides the storage systems along with their associated management tools. Minority investors include VMware (majority owned by EMC), which provides the server virtualization aka software-defined data center management tools, and Intel, whose processor chip technologies are used in the vBlocks. What has changed from Zen (e.g. yesterday or in the past) and now is that Cisco has sold the majority of its investment ownership in VCE to EMC (retaining about 10%). Learn more about VCE, their solutions and valueware in this post here (VCE revisited, now and Zen).
Activist activating activity?
EMC pulling VCE in-house, which should prop up its own internal sales figures by perhaps a few billion USD within a year or so (if not sooner), is not as appealing to activist investors who want results now, such as selling off parts of the company (e.g. EMC, VMware or other assets) or the entire company.
In other words, the activist investors who want to see things sold to make money are not happy, it appears, with EMC buying or investing.
Via Bloomberg
“The last thing on investors’ minds is the future of VCE,” Daniel Ives, an analyst with FBR Capital Markets, wrote in a note today. “EMC has a fire in its house right now and the company appears focused on painting its bedroom (e.g. VCE), while the Street wants a resolution on the strategic ownership situation sooner rather than later.”
Read more at Bloomberg
What's this EMC Federation stuff?
Note that EMC has organized itself into a federation that consists of EMC Information Infrastructure (EMCII), or what you might know as the traditional EMC storage and related software solutions, along with VMware, Pivotal and RSA. Also note that each of those federated companies has its own CEO as well as holdings or ownership of other companies; however, all report to a common federated leadership aka EMC. Thus when you hear "EMC" it could mean, depending on context, the federation mother ship which controls the individual companies, or it could refer to EMCII aka the traditional EMC. Click here to learn more about the EMC federation.
Converging Markets and Opportunities
Looking beyond near-term or quick gains, EMC could simply be doing what others do: taking ownership and control over certain things while reducing complexities associated with joint ventures. For example, with EMC and Cisco in a close partnership via VCE, both parties have been free to explore and take part in other joint initiatives, such as Cisco with EMC competitors NetApp and HDS among others. On the other hand, EMC partners with Arista for networking, not to mention the virtual network or software-defined network technology Nicira, now called NSX, acquired via VMware.
EMC is also in a partnership with Lenovo for developing servers to be used by EMC for various platforms to support storage, data and information services while shifting the lower-end SMB storage offerings such as Iomega to the Lenovo channel.
Note that Lenovo is in the process of absorbing the IBM xSeries (e.g. x86-based) business unit, a deal that started closing earlier in October (and will take several months to completely close in all countries around the world). For its part, Cisco is also partnering with hyper-converged solution provider SimpliVity, while EMC has announced a statement of direction to bring to market its own hyper-converged platform by end of the year. For those not familiar, hyper-converged solutions are simply the next evolution of converged or pre-bundled turnkey systems (some of you might have just had a déjà vu moment) that today tend to be targeted at SMBs and ROBOs, however they are also used for targeted applications such as VDI in larger environments.
What does this have to do with VCE?
If EMC is about to release, as its statements of direction indicate, a hyper-converged solution by year-end to compete head-on with those from Nutanix, SimpliVity and Tintri, as well as perhaps to a lesser extent VMware's EVO:RAIL, then having more control over VCE means reducing if not eliminating complexity around vBlocks (which are Cisco-based with EMC storage) vs. whatever EMC brings to market for hyper-converged. In the past, under the VCE initiative, storage was limited to EMC, with servers and networking from Cisco and hypervisors from VMware; however, what happens in the future remains to be seen.
Does this mean EMC is moving even more into servers than just virtual servers?
Tough to say, as EMC cannot afford to have its sales force lose focus on its traditional core products while ramping up other business. However, the EMC direct and partner teams want and need to keep up account control, which means gaining market share and footprint in those accounts. This also means EMC needs to find ways to take cost out of the sales and marketing process where possible to streamline, which perhaps bringing VCE in-house will help do.
Will this perhaps give the EMC direct and partner sales teams a new carrot or incentive to promote converged and hyper-converged at the cost of other competitors or incumbents? Perhaps; let's see what happens in the coming weeks.
What does this all mean?
In a nutshell, IMHO EMC is doing a couple of things here. One is cleaning up ownership in JVs to give itself more control, as well as options for other business transactions (mergers and acquisitions (M&A), sales or divestitures, new joint initiatives, etc.). Another is streamlining its business, from decision-making to quickly responding to new opportunities, as well as routes to market and other activities (e.g. removing complexity and cost vs. simply cutting cost).
Does this signal the prelude to something else? Perhaps. We know that EMC has made a statement of direction about hyper-converged; with VCE now more under EMC control, perhaps we will see more options under the VCE umbrella, both for lower-end and entry SMB as well as SME and large enterprise organizations.
What about the activist investors?
They are going to make noise as long as they can continue to make more money or get what they want. Publicly I would be shocked if the activist investors were not making statements that EMC should be selling assets not buying or investing.
On the other hand, any smart investor, financial or other analyst should see through the fog of what this relatively simple transaction means in terms of EMC getting further control of its future.
Of course the question remains: does EMC stay in control of its current federation of EMC, VMware, Pivotal and RSA, along with each of their respective holdings, or does EMC do a blockbuster merger, divestiture or acquisition?
Take a step back, look at the big picture!
Some things to keep an eye on:
Will this move help streamline decision-making enabling new solutions to be brought to market and customers quicker?
While there is a VMware focus, don't forget about the long-running, decades-old relationship with Microsoft and how that plays into the equation
Watch what EMC releases with its hyper-converged solution, as well as where it is focused, not to mention how it is sold
Also watch the EMC and Lenovo joint initiative, both for the Iomega storage activity as well as what EMC and Lenovo do with and for servers
Speaking of Lenovo, unless I missed something as of the time of writing this, have you noticed that Lenovo is not yet part of the VMware EVO:Rail initiative?
Seagate has shipped over 10 million HHDDs; is that a lot?
Recently Seagate announced that they have shipped over 10 million Hybrid Hard Disk Drives (HHDDs), also known as Solid State Hybrid Drives (SSHDs), over the past few years. Disclosure: Seagate has been a StorageIO client.
I know where some of those desktop-class HHDDs, including Momentus XTs, ended up, as I bought some of the 500GB and 750GB models via Amazon and have them in various systems. Likewise, I have installed in VMware servers the newer generation of enterprise-class SSHDs, which Seagate now refers to as Turbo models, as companions to my older HHDDs.
What is an HHDD or SSHD?
HHDDs continue to evolve, from initially accelerating reads to now also being capable of speeding up write operations, across different families (desktop/mobile, workstation and enterprise). What makes an HHDD or SSHD is that, as the name implies, they are a hybrid combining a traditional spinning magnetic Hard Disk Drive (HDD) with flash SSD storage. The flash persistent memory is in addition to the DRAM (non-persistent memory) typically found on HDDs and used as a cache buffer. These HHDDs or SSHDs are self-contained in that the flash is built into the actual drive as part of its internal electronics circuit board (controller). This means the drives should be transparent to operating systems or hypervisors on servers, or to storage controllers, without the need for special adapters, controller cards or drivers. In addition, there is no extra software needed to automate tiering or movement between the flash on the HHDD or SSHD and its internal HDD; it is all self-contained, managed by the drive's firmware (e.g. software).
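To make the self-contained caching idea concrete, here is an illustrative toy model (not Seagate's actual firmware logic): a small LRU flash cache in front of the spinning media, serving repeat reads from flash and promoting misses.

```python
# Toy model of a drive-managed hybrid cache: repeat reads hit the NAND
# flash; misses go to the rotating media and are promoted into flash.
from collections import OrderedDict

class HybridDrive:
    def __init__(self, flash_blocks):
        self.flash = OrderedDict()          # LRU cache: block -> data
        self.capacity = flash_blocks
        self.flash_hits = self.hdd_reads = 0

    def read(self, block):
        if block in self.flash:             # fast path: flash
            self.flash.move_to_end(block)
            self.flash_hits += 1
            return self.flash[block]
        self.hdd_reads += 1                 # slow path: spinning media
        data = f"data@{block}"
        self.flash[block] = data            # promote into flash
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)  # evict least-recently-used
        return data

drive = HybridDrive(flash_blocks=2)
for block in (1, 2, 1, 1, 3, 1):            # a read-heavy access pattern
    drive.read(block)
print(drive.flash_hits, drive.hdd_reads)    # prints "3 3"
```

The key point the sketch illustrates is that all of this happens inside the drive: the host just issues reads, and the firmware decides what lives in flash.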
There were a few reasons for this approach: the hybrid drive can be viewed as an evolution of the DRAM cache already incorporated into nearly all HDDs today.
Replacing or augmenting an expensive DRAM cache with a slower, cheaper NAND cache makes a lot of sense.
An SSHD performs much better than a standard HDD at a lower price than an SSD. In fact, an SSD of the same capacity as today’s average HDD would cost about an order of magnitude more than the HDD. The beauty of an SSHD is that it provides near-SSD performance at a near-HDD price. This could have been a very compelling sales proposition had it been promoted in a way that was understood and embraced by end users.
Some expected Seagate to include this technology in all HDDs rather than continue using it as a differentiator between different Seagate product lines. The company could have taken either of two approaches: use hybrid technology to break apart two product lines (standard HDDs and higher-margin hybrid HDDs), or merge hybrid technology into all Seagate HDDs to differentiate them from competitors' products, allowing Seagate to take slightly higher margins on all HDDs. Seagate chose the first path.
The net result is shipments of 10 million units since the 2010 introduction, for an average of 2.5 million per year, out of total annual HDD shipments of around 500 million units, or one half of one percent.
In his post, Jim raises some good points, including that HHDDs and SSHDs are still a fraction of the overall HDDs shipped on an annual basis. However, IMHO the annual growth rate has not been a flat average of 2.5 million; rather, it started at a lower rate and then increased year over year. For example, Seagate issued a press release back in summer 2011 saying they had shipped a million HHDDs a year after their release. Also keep in mind that those HHDDs were focused on desktop workstations, and in particular at gamers among others.
The early HHDDs, such as the Momentus XTs I was using starting in June 2010, only had read acceleration, which was better than HDDs, however it did not help on writes. Over the past couple of years there have been enhancements to the HHDDs, including the newer generation also known as SSHDs, or Turbo drives as Seagate now calls them. These newer drives include write acceleration, with models for mobile/laptop, workstation and enterprise class, including higher-performance and high-capacity versions. Thus my estimate has the growth on an accelerating curve vs. a linear growth rate (e.g. an average of 2.5 million units per year).
Period       Units shipped per year   Running total units shipped
2010-2011    1.0 Million              1.0 Million
2011-2012    1.25 Million (est.)      2.25 Million (est.)
2012-2013    2.75 Million (est.)      5.0 Million (est.)
2013-2014    5.0 Million (est.)       10.0 Million
StorageIO estimates on HHDD/SSHD units shipped based on Seagate announcements
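The estimates above can be sanity-checked with a few lines (the yearly figures are the StorageIO estimates from the table, not Seagate-reported numbers):

```python
# Sanity-check of the StorageIO estimates: yearly shipments grow
# year over year rather than holding flat at the 2.5 million average.
yearly = {"2010-2011": 1.0, "2011-2012": 1.25,
          "2012-2013": 2.75, "2013-2014": 5.0}   # millions of units (est.)

total = sum(yearly.values())
flat_average = total / len(yearly)
print(f"total shipped: {total} million")           # 10.0 million
print(f"flat average: {flat_average} million/yr")  # 2.5 million/yr

# Year-over-year growth multipliers show acceleration, not a flat rate.
values = list(yearly.values())
growth = [round(b / a, 2) for a, b in zip(values, values[1:])]
print(growth)  # [1.25, 2.2, 1.82]
```

Each year ships more than the last, which is why an accelerating curve fits the announcements better than a constant 2.5 million units per year.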
However, IMHO there is more to the story beyond the number of HHDDs/SSHDs shipped, or whether they are accelerating in deployment or growing at an average rate. Some of those perspectives are in my comments over on Jim Handy's site, with an excerpt below.
In talking with IT professionals (e.g. what the vendors/industry call users/customers), they are generally not aware that these devices exist, or if they are aware of them, they are only aware of what was available in the past (e.g. the consumer-class read-optimized versions). I do talk with some who are aware of the newer-generation devices; however, their comments are usually tied to lack of system integrator (SI) or vendor/OEM support, or sole-source supply. There was also a focus on promoting the HHDDs to "gamers" or other power users as opposed to broader marketing efforts. Most of these IT people are also not aware of the newer generation of SSHDs, or what Seagate is now calling "Turbo" drives.
When talking with VARs, there is a similar reaction: discussion about lack of support for HHDDs or SSHDs from the SI/vendor OEMs, or single-source supply concerns. Another common reaction is lack of awareness of the current generation of SSHDs (e.g. those that do write optimization, as well as enterprise-class versions).
When talking with vendors/OEMs, there is a general lack of awareness of the newer enterprise-class SSHDs/HHDDs that do write acceleration. Sometimes there is concern about how this would disrupt their "hybrid" SSD + HDD or tiering marketing stories/strategies, as well as comments about single-source suppliers. I have also heard concerns about how long or committed the drive manufacturers are going to be to SSHDs/HHDDs, or whether this is just a gap filler for now.
Not surprisingly, when I talk with industry pundits, influencers and amplifiers (e.g. analysts, media, consultants, blogalysts) there is a reflection of all the above: lack of awareness of what is available (not to mention lack of experience) vs. repeating what has been heard or read about in the past.
IMHO, while there are some technology hurdles, the biggest issue and challenge is basic marketing and business development to generate awareness with the industry (e.g. pundits), vendors/OEMs, VARs and IT customers, assuming of course that SSHDs/HHDDs are here to stay and not just a passing fad…
What about SSHD and HHDD performance on reads and writes?
What about the performance of today's HHDDs and SSHDs, particularly those that can accelerate writes as well as reads?
Enterprise Turbo SSHD read and write performance (Exchange Email)
Enterprise Turbo SSHD read and write performance (TPC-B database)
Enterprise Turbo SSHD read and write performance (TPC-E database)
Additional details and information about HHDDs/SSHDs, or Turbo drives as Seagate now refers to them, can be found in two StorageIO Industry Trends Perspective white papers (located here and here).
I continue to be bullish on hybrid storage solutions, from cloud to storage systems as well as hybrid storage devices. However, as with many technologies, just because something makes sense or is interesting does not mean it's a near-term or long-term winner. My main concern with SSHDs and HHDDs is whether the manufacturers such as Seagate and WD are serious about making them a standard feature in all drives, or simply see them as a near-term stop-gap solution.
What’s your take or experience with using HHDD and/or SSHDs?
Docker for Smarties (e.g. non-dummies) from VMworld 2014
In this Industry Trends Perspectives video podcast episode (on YouTube) I had a chance to visit with Nathan LeClaire of docker.com at the recent VMworld 2014 in San Francisco for a quick overview of what Docker and containers are about, what you need to know and where to find more information. Check out this StorageIO Industry Trends Perspective episode "Docker for Smarties" (aka not for dummies) via YouTube by clicking here or on the image below.
CompTIA needs input for their Storage+ certification, can you help?
The CompTIA folks are looking for some comments and feedback from those who are involved with data storage in various ways as part of planning for their upcoming enhancements to the Storage+ certification testing.
As a point of disclosure, I am a member of the CompTIA Storage+ certification advisory committee (CAC); however, I don't get paid or receive any other type of remuneration for contributing my time to give them feedback and guidance, other than a thanks and attaboy for giving back and paying it forward to help others in the IT community, similar to what my predecessors did.
I have been asked to pass this along to others (e.g. you, or whoever forwards it on to you).
Please take a few moments and feel free to share with others this link here to the survey for CompTIA Storage+.
What they are looking for is to validate the exam blueprint generated from a recent Job Task Analysis (JTA) process.
In other words, does the certification exam show real-world relevance to what you and your associates may be doing with data storage?
This is as opposed to being aligned with those whose job it is to create test questions and who may not understand what it is you, the IT pro involved with storage, do or do not do.
If you have ever taken a certification exam and scratched your head wondering why some questions that seem to lack real-world relevance were included, while ones grounded in practical on-the-job experience were missing, here's your chance to give feedback.
Note that you will not be rewarded with an Amex or Amazon gift card, Starbucks or Dunkin Donuts certificates, a free software download or some other incentive to play and win; however, if you take the survey let me know and I will be sure to tweet you an atta-boy or atta-girl! They are, however, giving away a free T-shirt to one of every ten survey takers.
Btw, if you really need something for free, send me a note (I'm not that difficult to find) as I have some free copies of Resilient Storage Networking (RSN): Designing Flexible Scalable Data Infrastructures (Elsevier); you simply pay shipping and handling. RSN can be used to help prepare you for various storage testing as well as other day-to-day activities.
CompTIA is looking for survey takers who have some hands-on experience or are involved with data storage (e.g. if you can spell SAN, NAS, disk or SSD and work with them hands-on, then you are a candidate ;).
Welcome to the CompTIA Storage+ Certification Job Task Analysis (JTA) Survey
Your input will help CompTIA evaluate which test objectives are most important to include in the CompTIA Storage+ Certification Exam
Your responses are completely confidential.
The results will only be viewed in the aggregate.
Here is what (and whom) CompTIA is looking for feedback from:
Has at least 12 to 18 months of experience with storage-related technologies.
Makes recommendations and decisions regarding storage configuration.
Facilitates data security and data integrity.
Supports a multiplatform and multiprotocol storage environment with little assistance.
Has basic knowledge of cloud technologies and object storage concepts.
As a small token of CompTIA appreciation for your participation, they will provide an official CompTIA T-shirt to every tenth (1 of every 10) person who completes this survey. Go here for official rules.
Click here to complete the CompTIA Storage+ survey
Contact CompTIA with any survey issues, research@comptia.org
What say you? Take a few minutes like I did and give some feedback; you will not be on the hook for anything, and if you do get spammed by the CompTIA folks, let me know and I in turn will spam them back for spamming you as well as me.
Welcome to the August 2014 edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization, software defined and data infrastructure topics. This past week I along with around 22,000 others attended VMworld 2014 in San Francisco. For those of you in Europe, VMworld Barcelona is October 14-16 2014 with registration and more information found here. Watch for more post VMworld coverage in upcoming newsletters, articles, posts along with other industry trend topics. Enjoy this edition of the StorageIO Update newsletter and look forward to catching up with you live or online while out and about this fall.
Greg Schulz @StorageIO
August 2014 Industry trend and perspectives
The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.
StorageIO comments and perspectives in the news
Virtual Desktop Infrastructures (VDI) remain a popular industry and IT customer topic, not to mention one of the favorite themes of Solid State Device (SSD) vendors. SSD component and system solution vendors along with their supporters love VDI because the by-product of aggregation (e.g. consolidation) applied to VDI is aggravation. Aggravation is the result of the increased storage I/O performance demands (IOPs, bandwidth, response time) that come from consolidating the various desktops. It should not be a surprise that some of the biggest fans encouraging organizations to adopt VDI are the SSD vendors. Read some of my comments and perspectives on VDI here at FedTech Magazine.
Speaking of virtualizing the data center, software defined data centers (SDDC) along with software defined networking (SDN) and software defined storage (SDS) remain popular including some software defined marketing (SDM). Here are some of my comments and perspectives moving beyond the hype of SDDC.
Recently the Fibre Channel Industry Association (FCIA), which works with the T11 standards body on both legacy or "classic" Fibre Channel (FC) as well as the newer FC over Ethernet (FCoE), made some announcements. These announcements include enhancements such as Fibre Channel Back Bone version 6 (FC-BB-6) among others. Both FC and FCoE are alive and doing well; granted, one has been around longer (FC) and can be seen at its plateau, while the other (FCoE) continues to evolve and grow in adoption. In some ways, FCoE is in a similar role today to where FC was in the late 90s and early 2000s, ironically facing some common FUD. You can read my comments here as part of a quote in support of the announcement, along with more of my industry trend perspectives in this blog post here.
Buyers guides are popular with vendors and VARs as well as IT organizations (e.g. customers); the following are some of my comments and industry trend perspectives appearing in Enterprise Storage Forum. Here are perspectives on buyers guides for Enterprise File Sync and Share (EFSS), Unified Data Storage and Object Storage. EMC has come under pressure, as mentioned in earlier StorageIO update newsletters, to increase its shareholder benefit including a spin-off of VMware. Here are some of my comments and perspectives that appeared in CruxialCIO. Read more industry trends perspectives comments on the StorageIO news page.
StorageIO video and audio pod casts
StorageIO audio podcasts are also available via and at StorageIO.tv
StorageIOblog posts and perspectives
Despite being declared dead, traditional or classic Fibre Channel (FC) along with FC over Ethernet (FCoE) continues to evolve with FC-BB-6, read more here.
VMworld 2014 took place this past week and included announcements about EVO:Rack and Rail (more on this in a future edition). You can get started learning about EVO:Rack and RAIL at Duncan Epping's (aka @DuncanYB) Yellow-Bricks site. VMware Virtual SAN (VSAN) is at the heart of EVO, which you can read an overview of here in this earlier StorageIO update newsletter (March 2014).
VMware VSAN example
Also watch for some extra content that I'm working on, including some video podcasts, articles and blog posts from my trip to VMworld 2014. However, one of the themes in the background of VMworld 2014 is the current beta of VMware vSphere V6 along with Virtual Volumes aka VVOL's. The following are a couple of my recent posts, including a primer overview of VVOL's along with a poll where you can cast your vote. Check out Are VMware VVOL's in your virtual server and storage I/O future? and VMware VVOL's and storage I/O fundamentals (Part 1) along with (Part 2).
StorageIO events and activities
The StorageIO calendar continues to evolve with several new events added for September and well into the fall, with more in the works including upcoming Dutch European sessions the week of October 6th in Nijkerk, Holland (learn more here). The following are some upcoming September events. These include live in-person seminars, conferences, keynote and speaking activities as well as on-line webinars, twitter chats, Google+ hangouts among others.
Sep 25 2014
MSP CMG
Server and StorageIO SSD industry trends perspectives and tips
TBA 9:30AM CT
Sep 18 2014
InfoWorld
Hybrid Storage In Government
Webinar 2:30PM ET
Sep 18 2014
Converged Storage and Storage Convergence
Webinar 9AM PT
Sep 17 2014
Data Center Convergence
Webinar 1PM PT
Sep 16 2014
Critical Infrastructure and Disaster Recovery
Webinar Noon PT
Sep 16 2014
Starwind Software
Software Defined Storage and Virtual SAN for Microsoft environments
Webinar 1PM CT
Sep 16 2014
Dell BackupU
Exploring the Data Protection Toolbox – Data and Application Replication
Google+ 9AM PT
Sep 2 2014
Dell BackupU
Exploring the Data Protection Toolbox – Data and Application Replication
Online Webinar 11AM PT
Note: Dates, times, venues and subject contents subject to change, refer to events page for current status
Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, software defined, big data, little data, cloud and object storage, performance and management trends among others.
Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.
Server and StorageIO Technology Tips and Tools
In addition to the industry trends and perspectives comments in the news mentioned above, along with the StorageIO blog posts, the following are some of my recent articles and tips that have appeared in various industry venues.
Over at the new Storage Acceleration site I have a couple of pieces, the first is What, When, Why & How to Accelerate Storage and the other is Tips for Measuring Your Storage Acceleration. Meanwhile over at Search Storage I have a piece covering What is the difference between a storage snapshot and a clone? and at Search Cloud Storage some tips about What’s most important to know about my cloud privacy policy?. Also with Software Defined in the news and a popular industry topic, I have a piece over at Enterprise Storage Forum looking at Has Software Defined Jumped the Shark? Check out these and others on the StorageIO tips and articles page.
StorageIO Update Newsletter Archives
Click here to view earlier StorageIO Update newsletters (HTML and PDF versions) at www.storageio.com/newsletter, where you can also download PDF versions of past editions. Subscribe to this newsletter (and pass it along) by clicking here (via the secure Campaigner site).
VMware VVOL’s and storage I/O fundamentals (Part II)
Note that this is a three-part series with the first piece here (e.g. Are VMware VVOL's in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 2).
First however let's be clear that while VMware uses terms including object and object storage in the context of VVOL's, it's not the same as some other object storage solutions. Learn more about object storage here at www.objectstoragecenter.com
Are VVOL’s accessed like other object storage (e.g. S3)?
No, VVOL's are accessed via the VMware software and associated API's that are supported by various storage providers. VVOL's are not LUN's like regular block (e.g. DAS or SAN) storage that uses SAS, iSCSI, FC, FCoE or IBA/SRP, nor are they NAS volumes like NFS mount points. Likewise VVOL's are not accessed using any of the various object storage access methods mentioned above (e.g. AWS S3, REST, CDMI, etc.); instead they are an application-specific implementation. For some of you this approach of an application-specific or unique storage access method may be new, perhaps revolutionary; otoh, some of you might be having a DejaVu moment right about now.
VVOL is not a LUN in the context of what you may know and like (or hate, even if you have never worked with them), likewise it is not a NAS volume like you know (or have heard of), neither are they objects in the context of what you might have seen or heard such as S3 among others.
Keep in mind that what makes up a VMware virtual machine are the VMX, VMDK and some other files (shown in the figure below), and if enough information is known about where those blocks of data are or can be found, they can be worked upon. Also keep in mind that, at least near-term, block is the lowest common denominator that all file systems and object repositories get built upon.
VMware ESXi storage I/O, IOPS and data store basics
Here is the thing: VVOL's will be accessible via a block interface such as iSCSI, FC or FCoE, or for that matter over Ethernet-based IP using NFS. Think of these storage interfaces and access mechanisms as the general transport for how vSphere ESXi will communicate with the storage system (e.g. their data path) under vCenter management.
What is happening inside the storage system that will be presented back to ESXi will be different from normal SCSI LUN contents and only understood by the VMware hypervisor. ESXi will still tell the storage system what it wants to do, including moving blocks of data. The storage system however will have more insight and awareness into the context of what those blocks of data mean. This is how storage systems will be able to more closely integrate snapshots, replication, cloning and other functions, by having awareness into which data to move, as opposed to moving or working with an entire LUN where a VMDK may live. Keep in mind that the storage system will still function as it normally would; just think of VVOL as another or new personality and access mechanism used for VMware to communicate with and manage storage.
VMware VVOL concepts (in general) with VMDK being pushed down into the storage system
Think in terms of iSCSI (or FC or something else) for block, or NFS for NAS, as being the addressing mechanism to communicate between ESXi and the storage array, except that instead of traditional SCSI LUN access and mapping, more work and insight is pushed down into the array. Also keep in mind that a LUN is simply an address range from which to use Logical Block Numbers or Logical Block Addresses. In the case of a storage array, it in turn manages placement of data on SSD or HDDs, in turn using blocks aka LBA/LBN's. In other words, a host that does not speak VVOL would get an error if trying to use a LUN or target on a storage system that is a VVOL, that's assuming it is not masked or hidden ;).
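To make the LUN-as-address-range idea concrete, here is a minimal sketch (all names hypothetical, not VMware or any vendor's actual API) of a LUN modeled as a flat range of fixed-size blocks addressed by LBA. The point it illustrates is that the storage side only sees opaque block reads and writes, with no insight into what those blocks mean:

```python
# Minimal sketch: a LUN as a linear range of Logical Block Addresses (LBAs).
# The "storage system" only sees opaque reads/writes of fixed-size blocks;
# it has no idea whether a block holds part of a VMDK, a VMFS structure, etc.

BLOCK_SIZE = 512  # bytes per block (sector)

class SimpleLun:
    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.blocks = {}  # sparse map of LBA -> bytes

    def write(self, lba, data):
        if not (0 <= lba < self.num_blocks):
            raise ValueError("LBA out of range")
        if len(data) != BLOCK_SIZE:
            raise ValueError("writes are whole blocks")
        self.blocks[lba] = data  # opaque payload, no context

    def read(self, lba):
        if not (0 <= lba < self.num_blocks):
            raise ValueError("LBA out of range")
        # unwritten blocks read back as zeros
        return self.blocks.get(lba, b"\x00" * BLOCK_SIZE)

lun = SimpleLun(num_blocks=2048)      # a small ~1 MB LUN
lun.write(42, b"V" * BLOCK_SIZE)      # hypervisor writes some VMDK data
print(lun.read(42)[:1])               # b'V'
```

A VVOL-aware array, by contrast, would know which of those LBAs belong to which VMDK or other object, which is what enables per-VM snapshots and clones rather than whole-LUN operations.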
What's the Storage Provider (SP)?
The Storage Provider aka SP is created by the, well, the provider of the storage system or appliance leveraging a VMware API (hint: sign up for the beta and there is an SDK). Simply put, the SP is a two-way communication mechanism leveraging VASA for reporting information, configuration and other insight up to the VMware ESXi hypervisor, vCenter and other management tools. In addition the storage provider receives VASA configuration information from VMware about how to configure the storage system (e.g. storage containers). Keep in mind that the SP is the out-of-band management interface between the storage system supporting and presenting VVOL's and the VMware hypervisors.
What's the Storage Container (SC)?
This is a storage pool created on the storage array or appliance (e.g. VMware vCenter works with the array and storage provider (SP) to create it) in place of using a normal LUN. With an SP and PE, the storage container becomes visible to ESXi hosts, and VVOL's can be created in the storage container until it runs out of space. Also note that the storage container takes on the storage profile assigned to it, which is inherited by the VVOL's in it. This is in place of presenting LUN's to ESXi on which you can then create VMFS data stores (or use as raw) and then carve storage to VMs.
Protocol endpoint (PE)
The PE provides visibility for the VMware hypervisor to see and access VMDK's and other objects (e.g. .vmx, swap, etc.) stored in VVOL's. The protocol endpoint (PE) manages or directs I/O received from the VM, enabling scaling across many virtual volumes by leveraging multipathing of the PE (inherited by the VVOL's). Note that for storage I/O operations, the PE is simply a pass-thru mechanism and does not store the VMDK or other contents. If using iSCSI, FC, FCoE or another SAN interface, the PE works on a LUN basis (again not actually storing data), and if using NAS NFS, then with a mount point. The key point is that the PE gets out of the way.
There certainly are many more details to VVOL's that you can get a preview of in the beta, as well as via various demos, webinars and VMworld sessions as more becomes public. However for now, hope you found this quick overview on VVOL's of use; since VVOL's at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta or poke around the web (for now). Also if you have not seen the first part overview to this piece, check it out here as I give some more links to get you started to learn more about VVOL's.
Keep an eye on and learn more about VVOL's at VMworld 2014 as well as in various other venues.
IMHO VVOL's are or will be in your future, however the question will be: is there going to be a back to the future moment for some of you with VVOL's?
What VVOL questions, comments and concerns are in your future and on your mind?
Note that this is a three-part series with the first piece here (e.g. Are VMware VVOL's in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 2).
Some of you may already be participating in the VMware beta of VVOL involving one of the initial storage vendors also in the beta program.
Ok, now let's go a bit deeper; however if you want some good music to listen to while reading this, check out @BruceRave GoDeepMusic.Net and shows here.
Taking a step back, digging deeper into Storage I/O and VVOL’s fundamentals
Instead of a VM host accessing its virtual disk (aka VMDK) which is stored in a VMFS formatted data store (part of ESXi hypervisor) built on top of a SCSI LUN (e.g. SAS, SATA, iSCSI, Fibre Channel aka FC, FCoE aka FC over Ethernet, IBA/SRP, etc) or an NFS file system presented by a storage system (or appliance), VVOL’s push more functionality and visibility down into the storage system. VVOL’s shift more intelligence and work from the hypervisor down into the storage system. Instead of a storage system simply presenting a SCSI LUN or NFS mount point and having limited (coarse) to no visibility into how the underlying storage bits, bytes as well as blocks are being used, storage systems gain more awareness.
Keep in mind that even files and objects still ultimately get mapped to pages and blocks aka sectors, even on nand flash-based SSD's. However also keep an eye on some new technology such as the Seagate Kinetic drive that, instead of responding to SCSI block-based commands, leverages object API's and associated software on servers. Read more about these emerging trends here and here at objectstoragecenter.com.
With a normal SCSI LUN the underlying storage system has no knowledge of how the upper-level operating system, hypervisor, file system or application such as a database (doing raw I/O) is allocating the pages or blocks of memory aka storage. It is up to the upper-level storage and data management tools to map from objects and files to the corresponding extents, pages and logical block addresses (LBA) understood by the storage system. In the case of a NAS solution, there is a layer of abstraction placed over the underlying block storage, handling file management and the associated file-to-LBA mapping activity.
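As a rough illustration of that mapping layer (the numbers and names here are purely hypothetical, not any real file system's on-disk format), a file can be tracked as a list of extents, each a contiguous run of LBAs, which the upper-level tools must walk to translate a file offset into a block address before issuing I/O:

```python
# Rough sketch: translating a file offset to an LBA via an extent list.
# Each extent is (file_block_start, lba_start, length_in_blocks).
BLOCK = 4096  # bytes per file-system block (hypothetical)

extents = [
    (0, 1000, 4),   # file blocks 0-3 live at LBAs 1000-1003
    (4, 5000, 2),   # file blocks 4-5 live at LBAs 5000-5001
]

def offset_to_lba(offset):
    """Map a byte offset within the file to the LBA holding it."""
    fblock = offset // BLOCK
    for fstart, lstart, length in extents:
        if fstart <= fblock < fstart + length:
            return lstart + (fblock - fstart)
    raise ValueError("offset not mapped")

print(offset_to_lba(0))          # 1000
print(offset_to_lba(5 * BLOCK))  # 5001
```

The storage system below this layer sees only the resulting LBA reads and writes; all of the file-level context stays up in the host's file system or hypervisor.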
Storage I/O and IOP basics and addressing: LBA’s and LBN’s
Getting back to VVOL, instead of simply presenting a LUN, which is essentially a linear range of LBA's (think of a big table or array) where the hypervisor then manages data placement and access, the storage system now gains insight into which LBA's correspond to various entities such as a VMDK or VMX, log, clone, swap or other VMware objects. With this insight, storage systems can now do native and more granular functions such as clone, replication and snapshot among others, as opposed to simply working on a coarse LUN basis. Similar concepts extend over to NAS NFS-based access. Granted, there is more to VVOL's, including the ability to get the underlying storage system more closely integrated with the virtual machine, hypervisor and associated management, including service management and classes or categories of service across performance, availability, capacity and economics.
What about VVOL, VAAI and VASA?
VVOL's build on earlier VMware initiatives including VAAI and VASA. With VAAI, VMware hypervisors can off-load common functions to storage systems that support features such as copy, clone and zero copy among others, similar to how a computer can off-load graphics processing to a graphics card if present.
VASA however provides a means for visibility, insight and awareness between the hypervisor and its associated management (e.g. vCenter etc) as well as the storage system. This includes storage systems being able to communicate and publish to VMware its capabilities for storage space capacity, availability, performance and configuration among other things.
With VVOL's, VASA gets leveraged for bidirectional (e.g. two-way) communication, where the VMware hypervisor and management tools can tell the storage system things such as configuration and activities to do, among others. Hence why VASA is important to have in your VMware CASA.
What’s this object storage stuff?
VVOL's are a form of object storage access in that they differ from traditional block (LUN's) and file (NAS volumes/mount points) access. However, keep in mind that not all object storage is the same, as there are different object storage access methods and architectures.
Object Storage basics, generalities and block file relationships
Avoid the mistake of assuming that when you hear object storage it means ANSI T10 (the folks that manage the SCSI command specifications) Object Storage Device (OSD) or some other specific thing. There are many different types of underlying object storage architectures, some with block and file as well as object access front ends. Likewise there are many different types of object access that sit on top of object architectures as well as traditional storage systems.
An example of how some object storage gets accessed (not VMware specific)
Also keep in mind that there are many different types of object access mechanisms including HTTP REST-based, S3 (e.g. a common industry defacto standard based on the Amazon Simple Storage Service), SNIA CDMI, SOAP, Torrent, XAM, JSON, XML, DICOM and HL7 just to name a few, not to mention various programmatic bindings or application-specific implementations and API's. Read more about object storage architectures, access and related topics, themes and trends at www.objectstoragecenter.com
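In contrast to block access by LBA, object access is by key. The following toy sketch (a dict-backed stand-in with invented names, not the real Amazon S3 API or any SDK) shows the flavor of S3-style PUT/GET semantics, where clients address whole objects by bucket and key and never see blocks or placement details:

```python
# Toy sketch of object (key/value) access semantics, S3-flavored:
# clients PUT and GET whole objects by bucket/key, along with metadata,
# and the store (not the client) decides physical placement.

class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # (bucket, key) -> (data, metadata)

    def put_object(self, bucket, key, data, metadata=None):
        self._objects[(bucket, key)] = (data, metadata or {})

    def get_object(self, bucket, key):
        try:
            return self._objects[(bucket, key)]
        except KeyError:
            raise KeyError("404 NoSuchKey: %s/%s" % (bucket, key))

store = ToyObjectStore()
store.put_object("backups", "vm01/disk.vmdk", b"...bytes...",
                 {"content-type": "application/octet-stream"})
data, meta = store.get_object("backups", "vm01/disk.vmdk")
print(meta["content-type"])   # application/octet-stream
```

Note how the access model carries per-object metadata along with the data, which is part of what distinguishes object access from LUN-style block addressing.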
Are VMware VVOL’s in your virtual server and storage I/O future?
Note that this is a three part series with the first piece here (e.g. Are VMware VVOL’s in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 2).
With VMworld 2014 just around the corner, for some of you the question is not if Virtual Volumes (VVOL’s) are in your future, rather when, where, how and with what.
What this means is that for some, hands-on beta testing is already occurring or will be soon, while for others it might be around the corner or down the road.
Some of you may already be participating in the VMware beta of VVOL involving one of the first storage vendors also in the beta program.
On the other hand, some of you may not be in VMware centric environments and thus VVOL’s may not yet be in your vocabulary.
How do you know if VVOL's are in your future if you don't know what they are?
First, to be clear, as of the time this was written VMware VVOL's are not released and are only in beta, as well as having been covered at earlier VMworld's. Consequently what you are going to read here is based on the VVOL material that has already been made public in various venues including earlier VMworld's and VMware blogs among other places.
The quick synopsis of VMware VVOL’s overview:
Higher level of abstraction of storage vs. traditional SCSI LUN’s or NAS NFS mount points
Tighter level of integration and awareness between VMware hypervisors and storage systems
Simplified management for storage and virtualization administrators
Removing complexity to support increased scaling
Enable automation and service managed storage aka software defined storage management
VVOL considerations and your future
As mentioned, as of this writing, VVOL’s are still a future item granted they exist in beta.
For those of you in VMware environments, now is the time to add VVOL to your vocabulary which might mean simply taking the time to read a piece like this, or digging deeper into the theories of operations, configuration, usage, hints and tips, tutorials along with vendor specific implementations.
Explore your options, and ask yourself: do you want VVOL's, or do you need them?
What support does your current vendor(s) have for VVOL's, or what is their statement of direction (SOD), which you might have to get from them under NDA?
This means that there will be some first vendors with some of their products supporting VVOL’s with more vendors and products following (hence watch for many statements of direction announcements).
Speaking of vendors, watch for a growing list of vendors to announce their current support or plans for supporting VVOL's, not to mention watch some of them jump up and down like Donkey in Shrek saying "oh oh pick me pick me".
When you ask a vendor if they support VVOL's, move beyond the simple yes or no: ask which of their specific products, whether it is block (e.g. iSCSI) or NAS file (e.g. NFS) based, and other caveats or configuration options.
Watch for more information about VVOL’s in the weeks and months to come both from VMware along with from their storage provider partners.
How will VVOL's impact your organization's best practices, policies and workflows, including who does what along with associated responsibilities?
Also check out this good VMware blog via Cormac Hogan (@CormacJHogan) that includes a video demo; granted it's from 2012, however some of this stuff actually does take time and thus it is very timely. Speaking of VMware, Duncan Epping (aka @DuncanYB) at his Yellow-Bricks site has some good posts to check out as well, with links to others including this here. Also check out the various VVOL-related sessions at VMworld as well as the many existing, and soon to be many more, blogs, articles and videos you can find via Google. And if you need a refresher, Why VASA is important to have in your VMware CASA.
Of course keep an eye here or whichever venue you happen to read this for future follow-up and companion posts, and if you have not done so, sign up for the beta here as there are lots of good material including SDKs, configuration guides and more.
Hope you found this quick overview on VVOL’s of use, since VVOL’s at the time of this writing are not yet released, you will need to wait for more detailed info, or join the beta or poke around the web (for now).
Keep an eye on and learn more about VVOL’s at VMworld 2014 as well as in various other venues.
IMHO VVOL’s are or will be in your future, however the question will be is there going to be a back to the future moment for some of you with VVOL’s?
Also what VVOL questions, comments and concerns are in your future and on your mind?
Despite being declared dead, Fibre Channel continues to evolve with FC-BB-6
Like many technologies that have been around for more than a decade or two, Fibre Channel (FC) for networking your servers and storage often gets declared dead when something new appears. It seems like just yesterday, when iSCSI was appearing on the storage networking scene in the early 2000s, that FC was declared dead; yet it remains and continues to evolve, including moving over Ethernet with FC over Ethernet (FCoE).
Recently the Fibre Channel Industry Association (FCIA) made an announcement on continued development and enhancements including FC-BB-6, which applies to both "classic" or "legacy" FC as well as the newer and emerging FCoE implementations. FCIA is not alone in this activity; as the name implies, it is the industry consortium that works with the T11 standards folks. T11 is a Technical Committee of the International Committee on Information Technology Standards (INCITS, pronounced "insights").
Keep in mind that there are a couple of pieces to Fibre Channel: the upper levels and the lower-level transports.
With FCoE, the upper level portions get mapped natively on Ethernet without having to map on top of IP as happens with distance extension using FCIP.
Likewise FCoE is more than simply mapping one of the FC upper level protocols (ULPs) such as the SCSI command set (aka SCSI_FCP) onto IP (e.g. iSCSI). Think of ULPs almost as a guest that gets transported or carried across the network; however let's also be careful not to play the software defined network (SDN) or virtual network, network virtualization or IO virtualization (IOV) card, or at least not yet, we will leave that up to some creative marketers ;).
At the heart of the Fibre Channel beyond the cable and encoding scheme are a set of protocols, command sets and one in particular is FC Backbone now in its 6th version (read more here at the T11 site, or here at the SNIA site).
VN2VN connectivity support enabling direct point to point virtual links (not to be confused with point to point physical cabling) between nodes in an FCoE network simplifying configurations for smaller SAN networks where zoning might not be needed (e.g. remove complexity and cost).
Support for Domain ID scalability including more efficient use by FCoE fabrics enabling large scalability of converged SANs. Also keep an eye on the emerging T11 FC-SW-6 distributed switch architecture for implementation over Ethernet in final stages of development.
"Fibre Channel is a proven protocol for networked data center storage that just got better," said Greg Schulz, founder StorageIO. "The FC-BB-6 standard helps to unlock the full potential of the Fibre Channel protocol that can be implemented on traditional Fibre Channel as well as via Ethernet based networks. This means FC-BB-6 enabled Fibre Channel protocol based networks give flexibility, scalability and secure high-performance resilient storage networks to be implemented."
Both "classic" or "legacy" Fibre Channel based cabling and networking are still alive with a road map that you can view here.
However FCoE also continues to mature and evolve, and in some ways FC-BB-6 and its associated technologies and capabilities can be seen as the bridge between the past and the future. Thus while the role of both FC and FCoE along with other ways of networking with your servers and storage continues to evolve, so too does the technology. Also keep in mind that not everything is the same in the data center or information factory, which is why we have different types of server, storage and I/O networks to address different needs, requirements and preferences.
Additional reading and viewing on FC, FCoE and storage networking:
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Server and StorageIO Update newsletter – July 2014
Welcome to the July 2014 edition of the StorageIO Update (newsletter) containing trends and perspectives on cloud, virtualization and data infrastructure topics. For some of you it is mid-summer (e.g. in the northern hemisphere) while for others it is mid-winter (southern hemisphere). Here in the Stillwater MN area it is mid-summer, which means enjoying the warm outdoor weather as well as getting ready for the busy late summer and early fall 2014 schedule of events, including VMworld among others. Starting in this edition there are a couple of new and expanded sections, including Technology tips and tools and the return of Just For Fun.
Greg Schulz @StorageIO
Let's jump into this mid-summer, or for some of you mid-winter, edition of the StorageIO Update newsletter.
Industry and Technology Updates
Following up from our June 2014 Newsletter, which included coverage of NetApp and of Avago selling its newly acquired (via the LSI acquisition) flash storage division to Seagate, along with other activity, here are some current industry activities. From a flash memory and solid state device (SSD) perspective, the Flash Memory Summit (FMS) is occurring in Santa Clara the week of August 5, 2014. Having insight under NDA into some of the many announcements as well as other things occurring, I will simply say: keep an eye out for various news from the FMS event.
EMC MegaLaunch and MegaActivist Investor
In addition to their recent MegaLaunch series of product announcements and updates, EMC is also in the news as it comes under pressure from activist investor hedge fund Elliott Management Corp to spin off VMware to increase shareholder value. What would a spin-off of VMware mean for customers of the EMC federation of core EMC technologies, VMware and Pivotal if the activists get their way? Here are some additional comments and perspectives via CruxialCIO. Click here to view the recent (July 23, 2014) earnings announcement for a summary of how EMC is doing in the market and financially.
Speaking of the EMC MegaLaunch on July 8, 2014, EMC announced enhancements and new models of their Isilon scale-out storage, new VMAX3 models with embedded virtualization, and XtremIO 3.0 with new models, among other items. EMC also announced the general availability of some solutions previously announced at EMCworld (May 2014), including Elastic Cloud Storage (ECS), ViPR 2.0 and SRM 2.0. For those not familiar with EMC ViPR Software Defined Storage Management, you can read more here, here, here and here.
What’s in the works?
Several projects and things are in the works that will show themselves in the coming weeks or months if not sooner. Some of which are more proof points coming out of the StorageIO labs involving software defined, converged, cloud, virtual, SSD, cache software, data protection and more.
Speaking of Software Defined, join me for a free Webinar on August 7 Hardware agnostic Virtual SAN for VMware ESXi Free (sponsored by Starwind Software). Other upcoming webinars include BackupU Summer Semester series (Sponsored by Dell Software) where we continue Exploring the Data Protection Toolbox. August also means VMworld in San Francisco so see you there. Check out the activities calendar below and at our main website to learn about these and other events.
Watch for more StorageIO posts, commentary, perspectives, presentations, webinars, tips and events on information and data infrastructure topics, themes and trends. Data Infrastructure topics include among others cloud, virtual, legacy server, storage I/O networking, data protection, hardware and software.
Enjoy this edition of the StorageIO Update newsletter and look forward to catching up with you live or online while out and about this summer.
StorageIO comments and perspectives in the news
The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.
NetworkComputing: Comments on Data Backup: Beyond Band-Aids
StorageNewsletter: Comments on Unified Storage Appliance Buying Guide
Forbes: Comments on Big Data and Enterprise Information Management
Toms Hardware: Comments on Server SAN: Demystifying Today’s Newest Storage Buzzword
CruxialCIO: Comments on EMC Bridges Cloud, On-Premise Storage With TwinStrata Buy
ComputerWeekly: Comments on Backup vs archive: Can they be merged?
CruxialCIO: Comments on EMC under pressure to spin-off VMware
EnterpriseStorageForum: Comments on Unified Storage and buyers guide tips
StorageIO video and audio pod casts
StorageIO audio podcasts are also available at StorageIO.tv
Remember to check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends from various sources.
If you are interested in data protection including Backup/Restore, BC, DR, BR and Archiving along with associated technologies, tools, techniques and trends visit our storageioblog.com/data-protection-diaries-main/ page.
StorageIO events and activities
The StorageIO calendar continues to evolve including several new events being added for August and well into the fall with more in the works. Here are some recent and upcoming activities including live in-person seminars, conferences, keynote and speaking activities as well as on-line webinars, twitter chats, Google+ hangouts among others.
Exploring the Data Protection Toolbox – The ABCDs of DFR (Data Footprint Reduction), part II
Online Webinar 11AM PT
Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.
Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.
Wrapping up this edition of the StorageIO Update newsletter is the return of the Just for fun and on a lighter note section, where we share something non-IT related. In this edition, how about some summertime backyard home video taken a few weeks ago? Check out this video of a black bear and her two cubs walking in, well, my backyard. First you will see Big Mama Bear, then Yogi Jr. appearing from the left, followed by baby Boo Boo, also from the left.
Video courtesy of KarenofArcola – Click on image to view
StorageIO Update Newsletter Archives
Click here to view earlier StorageIO Update newsletters (HTML and PDF versions) at www.storageio.com/newsletter. Subscribe to this newsletter (and pass it along) by clicking here (via the secure Campaigner site).