Server Storage I/O Cables Connectors Chargers & other Geek Gifts

server storage I/O trends

This is part one of a two-part series on what to get a geek for a gift; read part two here.

It is that time of the year when annual predictions are made for the upcoming year, including those that will be repeated next year or that were also made last year.

It’s also the time of the year to get various projects wrapped up, line up new activities, get the book-keeping things ready for year-end processing and taxes, as well as other things.

It’s also that time of the year to do some budget and project planning including upgrades, replacements and enhancements, all while balancing the over-subscribed holiday party schedule some of you may have.

Let’s not forget getting ready for vacations, or perhaps time off from work spent upgrading your home lab or working on other projects.

Then there are the gift lists, or trying to figure out what to get that difficult-to-shop-for person, particularly geeks who may have everything, want the latest and greatest that others have, or want something their peers don’t have yet.

Sure, I have a DJI Phantom II on my wish list; however, I also have other things on my needs list (e.g. what I really need and want vs. what would be fun to wish for).

DJI Phantom helicopter drone
Image via DJI.com, click on image to learn more and compare models

So here are some things for the geek who may have everything or is up on having the latest and greatest, yet forgot about or didn’t know about some of these things.

Not to mention some of these might seem really simple and low-cost; think of them like a Lego block or Erector set part where your imagination is the only boundary on how to use them. Also, most if not all of these are budget-friendly, particularly if you shop around.

Replace a CD/DVD with 4 x 2.5″ HDDs or SSDs

So you need to add some 2.5" SAS or SATA HDDs, SSDs, or HHDDs/SSHDs to your server to support your VMware ESXi, Microsoft Hyper-V, KVM, Xen, OpenStack, Hadoop or legacy *nix or Windows environment, or perhaps a gaming system. The challenge is that you are out of disk drive bay slots, and you want things neatly organized vs. a rat’s nest of cables hanging out of your system. No worries, assuming your server has an empty media bay (e.g. those 5.25" slots where CD/DVD drives or really old HDDs go), or if you can give up the CD/DVD drive, then use that bay and its power connector to add one of these.

This is a 4 x 2.5" SAS and SATA drive bay that has a common power connector (male Molex), with each drive bay having its own SATA connection. Because each drive has its own SATA connection, you can map the drives to available on-board SATA ports attached to a SAS or SATA controller, or attach available ports on a RAID adapter to the drive ports using a cable such as a small form factor (SFF) 8087 to SATA breakout.

sas storage enclosure | sas sata storage enclosure
(Left) Rear view with Molex power and SATA cables (Right) front view

I have a few of these in different systems, and what I like about them is that they support different drive speeds, plus they will accept a SAS drive where many enclosures in this category only support SATA. Once you mount your 2.5" HDD or SSD using screws, you can hot swap the drives (requires controller and OS support) and move them between similar enclosures as needed. The other things I like are the front indicator lights and, because each drive has its own separate connection, the ability to attach some of the drives to a RAID adapter while others connect to on-board SATA ports. Oh, and you can also mix drives of different speeds.
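As a quick sanity check after cabling drives to on-board ports or a RAID adapter, something like the following Python sketch can confirm what the operating system actually sees. This is a hedged example that assumes a Linux host reading /sys/block; device names and paths vary by system and controller.

```python
# List SCSI/SATA/SAS block devices the kernel has enumerated (Linux assumed).
import glob
import os

def list_block_devices():
    for dev in sorted(glob.glob("/sys/block/sd*")):
        name = os.path.basename(dev)
        model_path = os.path.join(dev, "device", "model")
        model = "unknown"
        if os.path.exists(model_path):
            with open(model_path) as f:
                model = f.read().strip()
        # /sys/block/<dev>/size reports 512-byte sectors
        with open(os.path.join(dev, "size")) as f:
            sectors = int(f.read())
        print(f"{name}: {model}, {sectors * 512 / 1e9:.1f} GB")

if __name__ == "__main__":
    list_block_devices()
```

If a freshly mounted drive does not show up, re-check the SFF-8087 breakout lane and the port mapping on the controller.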

Power connections

Depending on the type of your server, you may have Molex, SATA or some other type of power connection. You can use different power connection cables to go from one type (e.g. Molex) to another, create a connection for two devices, or create an extension to reach hard-to-get-to mounting locations.

Warning and disclosure note: keep in mind how much power you are drawing when attaching devices so as not to cause an electrical or fire hazard; follow the manufacturer’s instructions and specifications, doing so at your own risk! After all, just like Clark Griswold in National Lampoon’s Christmas Vacation, who found you could attach extension cords to splitters to splitters and fan out to have many lights attached, you don’t want to cause a fire or blackout when you plug too many drives in.


National Lampoon’s Christmas Vacation
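To put some rough numbers behind that warning, here is a back-of-the-envelope power-budget sketch in Python. The wattage figures are illustrative assumptions only; use your drive and power supply manufacturer’s specifications for real planning.

```python
# Rough check: will these drives overload the power connector feeding the bay?
# All numbers below are assumptions for illustration, not vendor specifications.
TYPICAL_DRAW_WATTS = {
    "2.5in_hdd": 3.0,   # assumed average active draw
    "3.5in_hdd": 9.0,
    "ssd": 4.0,
}

def total_draw(drives):
    return sum(TYPICAL_DRAW_WATTS[d] for d in drives)

rail_budget_watts = 54.0  # assumption: about 4.5 A available on a 12 V rail
drives = ["2.5in_hdd", "2.5in_hdd", "ssd", "ssd"]
draw = total_draw(drives)
print(f"Estimated draw: {draw:.1f} W of {rail_budget_watts:.1f} W budget")
if draw >= rail_budget_watts:
    print("Over budget: do not pull a Clark Griswold, split across connectors")
```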

Measuring Power

Ok, so you do not want to do a Clark Griswold (see above video) and overload a power circuit, or perhaps you simply want to know how many watts or amps you are drawing, or what the quality of your voltage is.

There are many types of power meters at various prices; some even have interfaces where you can grab event data to correlate with server storage I/O networking performance to derive metrics such as IOPs per watt. Speaking of IOPs per watt, check out the SNIA Emerald site where they have some good tools including a benchmark script that uses Vdbench to drive a hot band workload (e.g. basically kick the crap out of a storage system).

Back to power meters: I like the Kill A Watt series of meters as they give good info about amps, volts and power quality. I have these plugged into outlets so I can see how much power is being used by the battery backup units (BBU), aka UPS, that also serve as power surge filters. If needed I can move them further downstream to watch the power intake of a specific server, storage, network or other device.

Kill A Watt Power meter
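Speaking of IOPs per watt, the metric itself is simple arithmetic once you have a meter reading and a benchmark result. Here is a small sketch with hypothetical placeholder numbers:

```python
# IOPS per watt from a benchmark result (e.g. Vdbench) and a power meter
# reading (e.g. a Kill A Watt). Both numbers below are hypothetical.
def iops_per_watt(iops, watts):
    return iops / watts

measured_watts = 87.5     # read from the meter during the benchmark run
benchmark_iops = 14250.0  # reported by the benchmark tool
print(f"{iops_per_watt(benchmark_iops, measured_watts):,.1f} IOPS/W")
```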

Standby and backup power

Electrical power surge strips should be a given, or considered common sense; however, what should be common sense bears repeating so that it remains common sense: you should be using power surge strips or other protective devices.

Standby, UPS and BBU

For most situations a good surge suppressor will cover short power transients.

APC power strips and battery backup
Image via APC and model similar to those that I have

For slightly longer power outages of a few seconds to minutes, that’s where battery backup (BBU) units that also have surge suppression come into play. There are many types and sizes with various features to meet your needs and budget. I have several of these in a couple of different sizes, not only for servers, storage and networking equipment (including some WiFi access points, routers, etc.), I also have them for home things such as satellite DVRs. However, not everything needs to stay on, while other devices simply need to stay on long enough to shut down manually or via automated power-off sequences.
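For sizing, a rough runtime estimate helps decide which systems get which BBU. A hedged back-of-the-envelope sketch follows; the battery capacity and efficiency figures are assumptions, so use the manufacturer’s runtime charts for real planning.

```python
# Will a given BBU/UPS carry the measured load long enough for a clean shutdown?
def runtime_minutes(battery_wh, load_watts, efficiency=0.85):
    # efficiency is an assumed inverter/conversion loss factor
    return battery_wh * efficiency / load_watts * 60

ups_battery_wh = 140.0  # assumed usable battery energy for a small unit
measured_load = 180.0   # watts, e.g. from a Kill A Watt reading
print(f"Estimated runtime: {runtime_minutes(ups_battery_wh, measured_load):.0f} min")
```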

Alternate Power Generation

Generators are not just for the rich and famous or the large data center; like other technologies they are available in different sizes, power capacities and fuel sources, manual or automated, among other things.

kohler residential generator
Image via Kohler Power similar to model that I have

Note that even with a typical generator there will be a time gap from when the power goes off until the generator starts, stabilizes and you have good power. That’s where the BBU and UPS mentioned above come into play to bridge that time gap, which in my case is about 25-30 seconds. Btw, knowing how much power your technology is drawing, using tools such as the Kill A Watt, is part of the planning process to avoid surprises.

What about Solar Power

Yup, whether it is to fit in and be green, or simply to get some electrical power when or where it is not otherwise available to charge a battery or power some device, these small solar power devices are very handy.

solar charger
Image via Amazon.com
solar battery charger
Image via Amazon.com

For example, you can get or easily make an adapter to charge laptops or cell phones, or even power them for normal use (check the manufacturer’s information on power usage, amp and voltage draws, among other warnings, to prevent fires and other things). Btw, not only are these handy for computer-related things, they also work great for keeping the batteries on my fishing boat charged so that I have my fish finder and other electronics, just saying.
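As a quick check on whether a small panel can keep up with a device, the math is just watts = volts × amps. The figures here are assumptions for illustration; check the manufacturer’s specifications for real numbers.

```python
# Can an assumed 15 W panel keep up with a device charging over USB?
panel_watts = 15.0                    # assumed panel rating in full sun
device_volts, device_amps = 5.0, 1.2  # e.g. a phone charging over USB
device_watts = device_volts * device_amps
print(f"Device draws {device_watts:.1f} W vs. panel {panel_watts:.1f} W")
# Derate heavily for clouds and panel angle; half the rating is a safer plan.
```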

Fire suppression

How about a new or updated smoke and fire detection alarm monitor, as well as fire extinguisher for the geek’s software defined hardware that runs on power (electrical or battery)?

The following is from the site Fire Extinguisher 101 where you can learn more about different types of suppression technologies.

Image via Fire Extinguisher 101
  • Class A extinguishers are for ordinary combustible materials such as paper, wood, cardboard, and most plastics. The numerical rating on these types of extinguishers indicates the amount of water it holds and the amount of fire it can extinguish. Geometric symbol (green triangle)
  • Class B fires involve flammable or combustible liquids such as gasoline, kerosene, grease and oil. The numerical rating for class B extinguishers indicates the approximate number of square feet of fire it can extinguish. Geometric symbol (red square)
  • Class C fires involve electrical equipment, such as appliances, wiring, circuit breakers and outlets. Never use water to extinguish class C fires – the risk of electrical shock is far too great! Class C extinguishers do not have a numerical rating. The C classification means the extinguishing agent is non-conductive. Geometric symbol (blue circle)
  • Class D fire extinguishers are commonly found in a chemical laboratory. They are for fires that involve combustible metals, such as magnesium, titanium, potassium and sodium. These types of extinguishers also have no numerical rating, nor are they given a multi-purpose rating – they are designed for class D fires only. Geometric symbol (Yellow Decagon)
  • Class K fire extinguishers are for fires that involve cooking oils, trans-fats, or fats in cooking appliances and are typically found in restaurant and cafeteria kitchens. Geometric symbol (black hexagon)

Wrap up for part I

This wraps up part I of what to get a geek V2014, continue reading part II here.

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II 2014 Server Storage I/O Geek Gift ideas

server storage I/O trends

This is part two of a two-part series on what to get a geek for a gift; read part one here.

KVM switch

Not to be confused with a software defined network (SDN) switch for the KVM virtualization hypervisor, how about the other KVM switch?

kvm switch
My KVM switch in use, looks like five servers are powered on.

If you have several servers or devices that need a Keyboard Video Mouse (KVM) connection, or are using an A/B box or other devices, why not combine them using a KVM switch? I bought the Startech shown above from Amazon, which works out to under $40 a port (connection), meaning I do not have to have a keyboard, video monitor and mouse for each of those systems.

With my KVM switch shown above, I used the easy setup to name each of the ports via the management software so that when a button is pressed, not only does the applicable screen appear, a graphic text overlay also tells me which server is being displayed. This is handy, for example, as I have some identical servers (e.g. Lenovo TS140s) running VMware where a quick glance can help me verify I’m on the right one (e.g. without looking at the VMware host name or IP). This feature is also handy during power-on self test (POST), before the server’s physical or logical (e.g. VMware, Windows, Hyper-V, Ubuntu, OpenStack, etc.) identity is displayed. Another thing I like is that on the KVM switch there is a single VGA-type connector, while on the server end there is a VGA connector for attaching to the monitor port of the device, plus a break-out cable with USB for attaching to the server for keyboard and mouse.

Single drive shoe box

Usually drives live in larger server or storage system enclosures; however, now and then there is the need to supply power to an HDD or SSD along with a USB or eSATA interface for attaching it to a system. These are handy and versatile little aluminum enclosures.

single drive sata enclosure | disk enclosure

Note that you can now also find cables that do the same or a similar function for an inside-the-server connection (check out this cable among others at Amazon).

USB-SATA cable

It would be easy to assume that everybody has these by now, particularly since everybody (depending on who you listen to or what you read) has probably converted from an HDD to an SSD. However, for those who have not done an HDD-to-SSD conversion, or simply an HDD to newer HDD conversion, or who have an older HDD (or SSD) lying around, these cables come in very handy. Attach one end (e.g. the SATA end) to an HDD or SSD and the other to a USB port on a laptop, tablet or server. The caveat with these is that they generally only supply power (via USB) for a 2.5″ type drive, so for a larger, more power-hungry 3.5″ device you would need a different powered cable or a small shoe box type enclosure.

eSATA cable
(Left) USB to SATA and (Right) eSATA to SATA cables

Mophie USB charger

There are many different types of mobile device chargers available along with multi-purpose cables. I like the Mophie which I received at an event from NetApp (Thanks NetApp) and the flexible connector I received from Dyn while at AWS re:Invent 2014 (Thanks Dyn, I’m also a Dyn customer fwiw).
power charger | power cable
(Left) Mophie Power station and (Right) multi-connector cable

The Mophie has a USB connector so that you can charge it via a charging station or a computer, as well as attach a USB to Apple or other device connector. There is also a small connector for attaching to other devices. This is where the dandy Dyn device comes into play, as it has a USB as well as Apple and many other common connectors as shown in the figure below. Google around and I’m sure you can find both for sale, or as giveaways or something similar.

SAS SATA Interposer

sas interposer | server storage power
(Left) SAS to SATA interposer (Right) Molex power with SATA connector to SAS

Note that the above are intended for passing a SAS signal from a device such as an HDD or SSD to a SAS-based controller that happens to have SATA mechanical or keyed interfaces, such as with some servers. This means that the actual controller needs to be SAS, while the attached drives can be SATA or SAS, keeping in mind that a SATA device can plug into a SAS controller, however not vice versa. You can find the above at Amazon among other venues. Need a dual-lane SAS connector as an alternative to the one shown above on the right? Then check this one out at Amazon.

Need to learn more about the many different facets of SAS and related technologies, including how it coexists with iSCSI, Fibre Channel (FC), FCoE, InfiniBand and other interfaces? How about getting a free copy of SAS SANs for Dummies?

SAS SANS for dummies

There are also these for doing board level connections

esata connector | sata to esata cable | sata male to male gender changer
Some additional SAS and SATA drive connectors

In the above, on the left is a female-to-female SATA cable with a male-to-male SATA gender changer attached, to be used for example between a storage device and the SATA connector port on a server’s motherboard, HBA or RAID controller. In the middle are some SATA female-to-female cables, as well as a SATA to eSATA (external SATA) cable, and on the right are some SATA male-to-male gender changers, also shown in use on the left in the above figures.

Internal Power cable / connectors

If you or your geek are doing things in the lab or other environment adding and reconfiguring devices such as some of those mentioned above (or below), sooner or later there will be the need to do something with power cables and connectors.

power meter
Various cables, adapters and extenders

In the above figure are shown (top to bottom) a SATA male to Molex, a SATA female to SATA male, and to its right a SATA female to Molex. Below that are two SATA females to Molex, below that is a SATA male to dual Molex, and on the bottom is a single SATA to dual SATA. Needless to say there are many other combinations of connectors as well as different genders (e.g. male or female) along with extenders. As mentioned above, pay attention to the manufacturer’s recommended power draw and safety notices to prevent accidental electric shock or fire.

Intel Edison kit for IoT and IoD

Are you or your geek into the Internet of Things (IoT) or Internet of Devices (IoD) or other similar things and gadgets? Have you heard about Intel’s Edison breakout board for doing software development and attachment of various hardware things? Looking for something to move beyond a Raspberry Pi system?

Intel Edison board | Intel Edison kits
Images via Intel.com

Over the hills, through the woods WiFi

This past year I found Nanostation extended-range WiFi devices that solved a challenge (problem): how to get a secure WiFi signal a couple hundred yards through a thick forest between some hills.


Image via UBNT.com, check out their other models as well as resources for different deployments

The problem was it was too far, with too many leafy trees, to use a regular WiFi connection, and too far to run cable (which I did not want to do if I did not need to). I found the solution by getting a pair of Nanostation M2s, putting them into bridge mode, then doing some alignment with their narrow-beam antennas to bounce a signal through the woods. For those who simply need to go a long distance, these devices can be reconfigured to go several kilometers line of sight. Click on the image above to see other models of the Nanostation as well as links to various resources on how they can be used for other things or deployments.
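For those planning a similar point-to-point link, a free-space path loss calculation is a reasonable starting point (foliage adds loss well beyond it). The radio and antenna numbers below are assumptions for illustration, not quoted specifications.

```python
# Link-budget sketch for a point-to-point WiFi bridge using the standard
# free-space path loss (FSPL) formula; trees add loss beyond this estimate.
import math

def fspl_db(distance_km, freq_mhz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

tx_power_dbm = 28.0        # assumed radio output
antenna_gain_dbi = 11.0    # assumed directional antenna gain, each end
rx_sensitivity_dbm = -96.0
loss = fspl_db(0.2, 2412)  # roughly 200 meters at 2.4 GHz channel 1
rx = tx_power_dbm + 2 * antenna_gain_dbi - loss
print(f"FSPL {loss:.1f} dB, received {rx:.1f} dBm, "
      f"margin {rx - rx_sensitivity_dbm:.1f} dB before foliage loss")
```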

How about some software

  • UpDraft Backup – This is a WordPress blog plugin that I use to back up my entire web site, including the templates, plug-ins, MySQL database and all other related components. While my dedicated private server gets backed up by my service provider (Bluehost), I wanted an extra layer of protection along with a copy placed at a different place (e.g. at my AWS account). Updraft is an example of an emerging class of tools for backing up and protecting cloud-based and cloud-born data. For example, EMC recently acquired cloud backup startup Spanning, which has the ability to protect Salesforce, Google and other cloud-based data.
  • Visual ESXtop – This is a great free tool that provides a nice interface and remote access for doing ESXtop functions normally accomplished from the ESXi console.
  • Microsoft Diskspd – If you or your geek is into server storage I/O performance and benchmarking, has a Windows environment, and is looking for something besides Iometer, have them download the free Microsoft Diskspd utility.
  • Futuremark PCmark – Speaking of server storage I/O performance, check out Futuremark PCmark which will give your computer a great workout from graphics and video to compute, storage I/O and other common tasks.
  • RV Tools – Need to know more about your VMware virtual environment, take a quick inventory, or something else? Then your geek should have a copy of RV Tools from Robware.
  • iVMControl – For that vGeek who wants to be able to do simple VMware tasks from an iPhone, check out the iVMControl tool. It’s great; I don’t use it a lot, however there are times when I don’t need or want to use a tablet or PC to reach my VMware environment, and that’s when this virtual gadget comes into play.

Livescribe Digital Pen and Paper

How about a Livescribe digital pen and paper? Sure, you can use a PC, Apple or other tablet; however, some things are still easier done with traditional paper and pen. I got one of these about a year ago and use it for note taking, mocking up slides for presentations, and in some cases creating figures and other things. It would be easy to position the Livescribe and a Windows or other tablet as either/or competitors; however, for me, they are still better together, addressing different things, at least for now.

livescribe digital pen | livescribe digital pen

(Left) using my Livescribe and Echo digital pen (Right) resulting exported .Png

Tip: If you noticed, in the above left image (e.g. the original) the lines in the top figure, compared to the lines in the figure on the right, are different. If you encounter your Livescribe causing lines to run on or into each other, it is because your digital pen tip is sticking. It’s easy to check by looking at the tip of your digital pen to see if the small red light is on or off, or if it stays on when you press the pen tip. If it stays on, reset the pen tip. Also when you write, make sure to lift up on the pen tip so that it releases, otherwise you will get results like those shown on the right.

livescribe digital pen | livescribe digital pen
(Left) Livescribe Digital Desktop (Middle) Imported Digital Document (Right) Exported PNG

Also check out this optional application that turns a Livescribe Echo pen like mine into a digital tablet allowing you to draw on-screen with certain applications and webinar tools.

Some books for the geek

Speaking of reading, for those who are not up on the NoSQL and alternative SQL-based databases including Mongo, HBase, Riak, Cassandra and MySQL, add Seven Databases in Seven Weeks to your list. Click on the image to read my book review of it as well as find links to order it from Amazon. Seven Databases in Seven Weeks (A Guide to Modern Databases and the NoSQL Movement) is a book written by Eric Redmond (@coderoshi) and Jim Wilson (@hexlib), part of The Pragmatic Programmers (@pragprog) series, that takes a look at several non-SQL-based database systems.

seven database nosql

Where to get the above items

  • Ebay for new and used
  • Amazon for new and used
  • Newegg
  • PC Pit stop
  • And many other venues

What this all means

Note: Some of the above can be found at your favorite trade show or conference so keep that in mind for future gift giving.

What interesting geek gift ideas or wish list items do you have?

Of course if you have anything interesting to mention, feel free to add it to the comments (keep it clean though ;) or send it to me for future mention.

In the meantime, have a safe and happy holiday season, for whatever holiday you enjoy celebrating at any time of the year.

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data Storage Tape Update V2014, It’s Still Alive

server storage I/O trends

A year or so ago I did a piece on tape summit resources. Despite having been declared dead for decades, and though it will probably keep being declared dead for years to come, magnetic tape is in fact still alive, being used by some organizations; granted, its role is changing while the technology still evolves.

Here is the memo I received today from the PR folks of the Tape Storage Council (e.g. the tape vendors’ marketing consortium), and for simplicity (mine), I’m posting it here for you to read in its entirety vs. possibly in pieces elsewhere. Note that this is basically a tape status update and a collection of marketing and press release talking points; however, you can get an idea of the current messaging, who is using tape, and technology updates.

Tape Data Storage in 2014 and looking towards 2015

True to the nature of magnetic tape as a data storage medium, this is not a low-latency small post, rather a large high-capacity bulk post, or perhaps all you need to know about tape for now, or until next year. OTOH, if you are a tape fan, you can certainly take the memo from the tape folks, as well as visit their site for more info.

From the tape storage council industry trade group:

Today the Tape Storage Council issued its annual memo to highlight the current trends, usages and technology innovations occurring within the tape storage industry. The Tape Storage Council includes representatives of BDT, Crossroads Systems, FUJIFILM, HP, IBM, Imation, Iron Mountain, Oracle, Overland Storage, Qualstar, Quantum, REB Storage Systems, Recall, Spectra Logic, Tandberg Data and XpresspaX.  

Data Growth and Technology Innovations Fuel Tape’s Future
Tape Addresses New Markets as Capacity, Performance, and Functionality Reach New Levels

Abstract
For the past decade, the tape industry has been re-architecting itself and the renaissance is well underway. Several new and important technologies for both LTO (Linear Tape Open) and enterprise tape products have yielded unprecedented cartridge capacity increases, much longer media life, improved bit error rates, and vastly superior economics compared to any previous tape or disk technology. This progress has enabled tape to effectively address many new data intensive market opportunities in addition to its traditional role as a backup device such as archive, Big Data, compliance, entertainment and surveillance. Clearly disk technology has been advancing, but the progress in tape has been even greater over the past 10 years. Today’s modern tape technology is nothing like the tape of the past.

The Growth in Tape  
Demand for tape is being fueled by unrelenting data growth, significant technological advancements, tape’s highly favorable economics, the growing requirements to maintain access to data “forever” emanating from regulatory, compliance or governance requirements, and the big data demand for large amounts of data to be analyzed and monetized in the future. The Digital Universe study suggests that the world’s information is doubling every two years and much of this data is most cost-effectively stored on tape.

Enterprise tape has reached an unprecedented 10 TB native capacity with data rates reaching 360 MB/sec. Enterprise tape libraries can scale beyond one exabyte. Enterprise tape manufacturers IBM and Oracle StorageTek have signaled future cartridge capacities far beyond 10 TBs with no limitations in sight.  Open systems users can now store more than 300 Blu-ray quality movies with the LTO-6 2.5 TB cartridge. In the future, an LTO-10 cartridge will hold over 14,400 Blu-ray movies. Nearly 250 million LTO tape cartridges have been shipped since the format’s inception. This equals over 100,000 PB of data protected and retained using LTO Technology. The innovative active archive solution combining tape with low-cost NAS storage and LTFS is gaining momentum for open systems users.

Recent Announcements and Milestones
Tape storage is addressing many new applications in today’s modern data centers while offering welcome relief from constant IT budget pressures. Tape is also extending its reach to the cloud as a cost-effective deep archive service. In addition, numerous analyst studies confirm the TCO for tape is much lower than disk when it comes to backup and data archiving applications. See TCO Studies section below.

  • On Sept. 16, 2013 Oracle Corp announced the StorageTek T10000D enterprise tape drive. Features of the T10000D include an 8.5 TB native capacity and data rate of 252 MB/s native. The T10000D is backward read compatible with all three previous generations of T10000 tape drives.
  • On Jan. 16, 2014 Fujifilm Recording Media USA, Inc. reported it has manufactured over 100 million LTO Ultrium data cartridges since its release of the first generation of LTO in 2000. This equates to over 53 thousand petabytes (53 exabytes) of storage and more than 41 million miles of tape, enough to wrap around the globe 1,653 times.
  • April 30, 2014, Sony Corporation independently developed a soft magnetic underlayer with a smooth interface using sputter deposition, and created a nano-grained magnetic layer with fine magnetic particles and uniform crystalline orientation. This layer enabled Sony to successfully demonstrate the world’s highest areal recording density for tape storage media of 148 Gb/in². This areal density would make it possible to record more than 185 TB of data per data cartridge.
  • On May 19, 2014 Fujifilm in conjunction with IBM successfully demonstrated a record areal data density of 85.9 Gb/in² on linear magnetic particulate tape using Fujifilm’s proprietary NANOCUBIC™ and Barium Ferrite (BaFe) particle technologies. This breakthrough in recording density equates to a standard LTO cartridge capable of storing up to 154 terabytes of uncompressed data, making it 62 times greater than today’s current LTO-6 cartridge capacity and projects a long and promising future for tape growth.
  • On Sept. 9, 2014 IBM announced LTFS LE version 2.1.4, extending LTFS (Linear Tape File System) tape library support.
  • On Sept. 10, 2014 the LTO Program Technology Provider Companies (TPCs), HP, IBM and Quantum, announced an extended roadmap which now includes LTO generations 9 and 10. The new generation guidelines call for compressed capacities of 62.5 TB for LTO-9 and 120 TB for generation LTO-10 and include compressed transfer rates of up to 1,770 MB/second for LTO-9 and a 2,750 MB/second for LTO-10. Each new generation will include read-and-write backwards compatibility with the prior generation as well as read compatibility with cartridges from two generations prior to protect investments and ease tape conversion and implementation.
  • On Oct. 6, 2014 IBM announced the TS1150 enterprise drive. Features of the TS1150 include a native data rate of up to 360 MB/sec versus the 250 MB/sec native data rate of the predecessor TS1140 and a native cartridge capacity of 10 TB compared to 4 TB on the TS1140. LTFS support was included.
  • On Nov. 6, 2014, HP announced a new release of StoreOpen Automation that delivers a solution for using LTFS in automation environments with Windows OS, available as a free download. This version complements their already existing support for Mac and Linux versions to help simplify integration of tape libraries to archiving solutions.

Significant Technology Innovations Fuel Tape’s Future
Development and manufacturing investment in tape library, drive, media and management software has effectively addressed the constant demand for improved reliability, higher capacity, power efficiency, ease of use and the lowest cost per GB of any storage solution. Below is a summary of tape’s value proposition followed by key metrics for each:

  • Tape drive reliability has surpassed disk drive reliability
  • Tape cartridge capacity (native) growth is on an unprecedented trajectory
  • Tape has a faster device data rate than disk
  • Tape has a much longer media life than any other digital storage medium
  • Tape’s functionality and ease of use is now greatly enhanced with LTFS
  • Tape requires significantly less energy consumption than any other digital storage technology
  • Tape storage has  a much lower acquisition cost and TCO than disk

Reliability. Tape reliability levels have surpassed HDDs. Reliability levels for tape exceed those of the most reliable disk drives by one to three orders of magnitude. The BER (Bit Error Rate – bits read per hard error) for enterprise tape is rated at 1×10¹⁹ and 1×10¹⁷ for LTO tape. This compares to 1×10¹⁶ for the most reliable enterprise Fibre Channel disk drive.

Capacity and Data Rate. LTO-6 cartridges provide 2.5 TB capacity and more than double the compressed capacity of the preceding LTO-5 drive with a 14% data rate performance boost to 160 MB/sec. Enterprise tape has reached 8.5 TB native capacity and 252 MB/sec on the Oracle StorageTek T10000D and 10 TB native capacity and 360 MB/sec on the IBM TS1150. Tape cartridge capacities are expected to grow at unprecedented rates for the foreseeable future.

Media Life. Manufacturers specifications indicate that enterprise and LTO tape media has a life span of 30 years or more while the average tape drive will be deployed 7 to 10 years before replacement. By comparison, the average disk drive is operational 3 to 5 years before replacement.

LTFS Changes Rules for Tape Access. Compared to previous proprietary solutions, LTFS is an open tape format that stores files in application-independent, self-describing fashion, enabling the simple interchange of content across multiple platforms and workflows. LTFS is also being deployed in several innovative “Tape as NAS” active archive solutions that combine the cost benefits of tape with the ease of use and fast access times of NAS. The SNIA LTFS Technical Working Group has been formed to broaden cross–industry collaboration and continued technical development of the LTFS specification.

TCO Studies. Tape’s widening cost advantage compared to other storage mediums makes it the most cost-effective technology for long-term data retention. The favorable economics (TCO, low energy consumption, reduced raised floor) and massive scalability have made tape the preferred medium for managing vast volumes of data. Several tape TCO studies are publicly available and the results consistently confirm a significant TCO advantage for tape compared to disk solutions.

According to the Brad Johns Consulting Group, a TCO study for an LTFS-based ‘Tape as NAS’ solution totaled $1.1M compared with $7.0M for a disk-based unified storage solution. This equates to a savings of over $5.9M over a 10-year period, which is more than 84 percent less than the equivalent amount for a storage system built on a 4 TB hard disk drive unified storage system. From a slightly different perspective, this is a TCO savings of over $2,900/TB of data. Source: Johns, B., “A New Approach to Lowering the Cost of Storing File Archive Information.”

Another comprehensive TCO study by ESG (Enterprise Strategies Group) comparing an LTO-5 tape library system with a low-cost SATA disk system for backup using de-duplication (best case for disk) shows that disk deduplication has a 2-4x higher TCO than the tape system for backup over a 5 year period. The study revealed that disk has a TCO of 15x higher than tape for long-term data archiving.

Select Case Studies Highlight Tape and Active Archive Solutions
CyArk is a non-profit foundation focused on the digital preservation of cultural heritage sites, including places such as Mt. Rushmore and Pompeii. CyArk predicted that their data archive would grow by 30 percent each year for the foreseeable future, reaching one to two petabytes in five years. They needed a storage solution that was secure, scalable, and more cost-effective to provide the longevity required for these important historical assets. To meet this challenge CyArk implemented an active archive solution featuring LTO and LTFS technologies.

DreamWorks Animation, a global computer graphics (CG) animation studio, has implemented a reliable, cost-effective and scalable active archive solution to safeguard a 2 PB portfolio of finished movies and graphics, supporting a long-term asset preservation strategy. The studio’s comprehensive, tiered and converged active archive architecture, which spans software, disk and tape, saves the company time and money and reduces risk.

LA Kings of the NHL rely extensively on digital video assets for marketing activities with team partners and for their broadcast affiliation with Fox Sports. Today, the Kings save about 200 GB of video per game for an 82-game regular season and are on pace to generate about 32-35 TB of new data per season. The Kings chose to implement Fujifilm’s Dternity NAS active archive appliance, an open LTFS-based architecture. The Kings wanted an open source archiving solution which could outlast its original hardware while maintaining data integrity. Today with Dternity and LTFS, the Kings don’t have to decide what data to keep because they are able to cost-effectively save everything they might need in the future.

McDonald’s primary challenge was to create a digital video workflow that streamlines the management and distribution of their global video assets for their video production and post-production environment. McDonald’s implemented the Spectra T200 tape library with LTO-6 providing 250 TB of McDonald’s video production storage. Nightly, incremental backup jobs store their media assets into separate disk and LTO- 6 storage pools for easy backup, tracking and fast retrieval. This system design allows McDonald’s to effectively separate and manage their assets through the use of customized automation and data service policies.

NCSA employs an Active Archive solution providing 100 percent of the nearline storage for the NCSA Blue Waters supercomputer, which is one of the world’s largest active file repositories stored on high capacity, highly reliable enterprise tape media. Using an active archive system along with enterprise tape and RAIT (Redundant Arrays of Inexpensive Tape) eliminates the need to duplicate tape data, which has led to dramatic cost savings.

Queensland Brain Institute (QBI) is a leading center for neuroscience research. QBI’s research focuses on the cellular and molecular mechanisms that regulate brain function to help develop new treatments for neurological and mental disorders. QBI’s storage system has to scale extensively to store, protect, and access tens of terabytes of data daily to support cutting-edge research. QBI chose an Oracle solution consisting of Oracle’s StorageTek SL3000 modular tape libraries with StorageTek T10000 enterprise tape drives. The Oracle solution improved QBI’s ability to grow, attract world-leading scientists and meet stringent funding conditions.

Looking Ahead to 2015 and Beyond
The role tape serves in today’s modern data centers is expanding as IT executives and cloud service providers address new applications for tape that leverage its significant operational and cost advantages. This recognition is driving investment in new tape technologies and innovations with extended roadmaps, and it is expanding tape’s profile from its historical role in data backup to one that includes long-term archiving requiring cost-effective access to enormous quantities of stored data. Given the current and future trajectory of tape technology, data intensive markets such as big data, broadcast and entertainment, archive, scientific research, oil and gas exploration, surveillance, cloud, and HPC are expected to become significant beneficiaries of tape’s continued progress. Clearly the tremendous innovation, compelling value proposition and development activities demonstrate tape technology is not sitting still; expect this promising trend to continue in 2015 and beyond. 

Visit the Tape Storage Council at tapestorage.org

What this means and summary

Like it or not, tape is still alive, being used while the technology evolves with new enhancements as outlined above.

Good to see the tape folks doing some marketing to get their story told and heard for those who are still interested.

Does that mean I still use tape?

Nope, I stopped using tape for local backups and archives well over a decade ago, using disk-to-disk and disk-to-cloud instead.

Does that mean I believe that tape is dead?

Nope, I still believe that for some organizations and some usage scenarios it makes good sense; however, like with most data storage related technologies, it’s not a one-size-fits-everything value proposition.

On a related note for cloud and object storage, visit www.objectstoragecenter.com

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud Conversations: Revisiting re:Invent 2014 and other AWS updates

server storage I/O trends

This is part one of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part two here.

Revisiting re:Invent 2014 and other AWS updates

AWS re:Invent 2014

A few weeks ago I attended Amazon Web Services (AWS) re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related topic themes, you should check out AWS re:Invent.

AWS made several announcements at re:Invent, including many around development tools, compute and data storage services. One of those to keep an eye on is the cloud-based Aurora relational database service that complements existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).

Some recent AWS announcements prior to re:Invent include:

AWS vCenter Portal

Using the AWS Management Portal for vCenter adds a plug-in to your VMware vCenter to manage your AWS infrastructure. The vCenter plug-in for AWS includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and to create VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.

AWS re:Invent content


AWS Andy Jassy (Image via AWS)

November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements, including the Aurora relational database that complements the existing RDS (Relational Database Service). In addition to Andy, the keynote sessions also included various special guests, ranging from AWS customers and partners to internal people, in support of the various initiatives and announcements.


Amazon.com CTO Werner Vogels (Image via AWS)

November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, Amazon.com CTO Werner Vogels appears, making announcements about the new Container and Lambda services.

AWS re:Invent announcements

Announcements and enhancements made by AWS during re:Invent include:

  • Key Management Service (KMS)
  • Amazon RDS for Aurora
  • Amazon EC2 Container Service
  • AWS Lambda
  • Amazon EBS Enhancements
  • Application development, deployment and life-cycle management tools
  • AWS Service Catalog
  • AWS CodeDeploy
  • AWS CodeCommit
  • AWS CodePipeline

Key Management Service (KMS)

A hardware security module (HSM) backed key management service for creating and controlling the encryption keys that protect the security of digital assets. It integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs, for regulatory, compliance and management purposes. Learn more about AWS KMS here
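As a rough illustration of the service’s shape, here is a minimal sketch using the boto3 Python SDK; it assumes credentials are configured, and the key alias is a placeholder, so treat it as a sketch rather than a definitive implementation.

```python
# Encrypt and decrypt a small payload with a KMS-managed key (boto3 assumed).
import boto3

kms = boto3.client("kms")
ciphertext = kms.encrypt(
    KeyId="alias/my-example-key",  # hypothetical key alias
    Plaintext=b"secret configuration value",
)["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"secret configuration value"
```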

AWS Database

For those who are not familiar, AWS has a suite of database-related services including SQL and NoSQL based, from simple to transactional to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database types, instances and services. RDS instances and types include SimpleDB, MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below). Other little-data database and big data repository related offerings include DynamoDB (a NoSQL database), ElastiCache (an in-memory cache repository) and Redshift (a large-scale data warehouse and big data repository).

In addition to the database services offered by AWS, you can also combine various AWS resources including EC2 compute, EBS and other storage offerings to create your own solution. For example, there are various Amazon Machine Images (AMIs) or pre-built operating systems and database tools available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak along with other non-SQL or alternative databases and key-value repositories, check out Seven Databases in Seven Weeks and my book review of it here.

Seven Databases book review
Seven Databases in Seven Weeks and NoSQL movement available from Amazon.com

Amazon RDS for Aurora

Aurora is a new relational database offering, part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS is claiming 5x better performance than standard MySQL with Aurora, while being resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.

Amazon EC2 C4 instances

AWS will be adding a new C4 instance type as a next generation of EC2 compute instance based on Intel Xeon E5-2666 v3 (Haswell) processors. The Intel Xeon E5-2666 v3 processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute-intensive workloads including analytics, gaming, and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.

Amazon EC2 Container Service

Containers such as those via Docker have become popular for helping developers rapidly build and deploy scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high-performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity and Access Management (IAM) roles, along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about EC2 Container Service here and more about Docker here.

Docker for smarties
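To give a feel for the container service flow described above, here is a hedged sketch using the boto3 Python SDK; the cluster, task and image names are placeholders, and it assumes the service is available in your account and region.

```python
# Create a cluster, register a Docker image as a task, and run it (boto3 assumed).
import boto3

ecs = boto3.client("ecs")
ecs.create_cluster(clusterName="demo-cluster")
ecs.register_task_definition(
    family="demo-task",
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",  # any Docker image
        "memory": 128,            # MiB reserved for the container
        "cpu": 100,               # CPU units
    }],
)
ecs.run_task(cluster="demo-cluster", taskDefinition="demo-task", count=1)
```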

Continue reading about re:Invent 2014 and other recent AWS enhancements here in part two of this two-part series.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: Revisiting re:Invent 2014, Lambda and other AWS updates

server storage I/O trends

Part II: Revisiting re:Invent 2014 and other AWS updates

This is part two of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part one here.

AWS re:Invent 2014

AWS re:Invent announcements

Announcements and enhancements made by AWS during re:Invent include:

  • Key Management Service (KMS)
  • Amazon RDS for Aurora
  • Amazon EC2 Container Service
  • AWS Lambda
  • Amazon EBS Enhancements
  • Application development, deployment and life-cycle management tools
  • AWS Service Catalog
  • AWS CodeDeploy
  • AWS CodeCommit
  • AWS CodePipeline

AWS Lambda

In addition to announcing new higher-performance Elastic Compute Cloud (EC2) instances along with the container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your application code in response to events, activities or other triggers. In addition to running your code, the Lambda service is billed in 100 millisecond increments along with corresponding memory use, vs. standard EC2 per-hour billing. What this means is that instead of paying for an hour of time for your code to run, you can choose to use the Lambda service with more fine-grained consumption billing.

The Lambda service can be used to have your code functions staged, ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. object) changes, messages arriving via Kinesis streams, or table updates in databases. Some examples include responding to an event such as a web-site click, responding to a data upload (photo, image, audio, file or other object), indexing, streaming or analyzing data, receiving output from a connected device (think Internet of Things (IoT) or Internet of Devices (IoD)), or triggering from an in-app event, among others. The basic idea with Lambda is to be able to pay for only the amount of time needed to do a particular function without having to have an AWS EC2 instance dedicated to your application. Initially Lambda supports Node.js (JavaScript) based code that runs in its own isolated environment.
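To make the event-driven model concrete, here is the general shape of a function reacting to an S3 upload. Note the hedge: at announcement time Lambda supported Node.js, so this Python version (a runtime AWS added later) is only for consistency with the other sketches in this piece; the event layout mirrors the documented S3 notification format.

```python
# Shape of an event-driven function: react to each newly uploaded S3 object.
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # e.g. generate a thumbnail, index the object, or update a table
        print(f"New object uploaded: s3://{bucket}/{key}")
```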

AWS cloud example
Various application code deployment models

The Lambda service is pay-for-what-you-consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory, and the execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte-seconds of time per month. A GByte-second is one GByte of memory (e.g. DRAM vs. storage) consumed for one second. For example, if your application runs 100,000 times for 1 second each while consuming 128 MB of memory, that is 100,000 × 1 s × 128 MB = 12,800,000 MB-seconds, or 12,500 GByte-seconds (at 1,024 MB per GByte). View various pricing models here on the AWS Lambda site that show examples for different memory sizes, number of times a function runs, and run times.

How much memory you select for your application code determines how far it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions, counted from when your code starts running until it ends or otherwise terminates, rounded up to the nearest 100 ms. Note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS is showing Lambda pricing as free for the first 1 million requests, and beyond that, $0.20 per 1 million requests ($0.0000002 per request), plus a duration charge that also depends on the amount of memory you allocated for your code. Once past the 400,000 GByte-second per month free tier, the fee is $0.00001667 for every GByte-second used.
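Here is that billing math as a small worked example in Python, using the rates quoted above (late 2014; check current AWS pricing before planning around them).

```python
# AWS Lambda monthly cost estimate from the rates described in this post.
REQUEST_RATE = 0.20 / 1_000_000  # $ per request beyond the free tier
GB_SECOND_RATE = 0.00001667      # $ per GByte-second beyond the free tier
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000

def monthly_cost(invocations, avg_seconds, memory_mb):
    # Real billing rounds each invocation up to the nearest 100 ms.
    gb_seconds = invocations * avg_seconds * (memory_mb / 1024)
    req_cost = max(invocations - FREE_REQUESTS, 0) * REQUEST_RATE
    dur_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_RATE
    return gb_seconds, req_cost + dur_cost

gb_s, cost = monthly_cost(100_000, 1.0, 128)
print(f"{gb_s:,.0f} GByte-seconds, ${cost:.2f}/month")  # 12,500 GB-s: free tier
```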

Why use AWS Lambda vs. an EC2 instance

Why would you use AWS Lambda vs. provisioning a container or an EC2 instance, or running your application code function on a traditional or virtual machine?

If you need control and can leverage an entire physical server with its operating system (O.S.), application and support tools for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (O.S., applications and tools) for your code on a shared virtual on-premises environment, then that can be an option. Likewise, if you need to move your application to an isolated cloud machine (CM) that hosts an O.S. along with your application, paying for those resources on, for example, an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that’s where Docker and containers come into play to off-load some of the traditional application dependency overhead.

However, if all you want to do is add some code logic, for example to support processing when an object, file or image is uploaded to AWS S3, without having to stand up an EC2 instance along with the associated server, O.S. and complete application stack, that’s where AWS Lambda comes into play. Simply create your code (initially JavaScript), specify how much memory it needs, define what events or activities will trigger or invoke it, and you have a solution.

View AWS Lambda pricing along with free tier information here.

Amazon EBS Enhancements

AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. This means that you can create volumes up to 16 TB and 10,000 IOPS for AWS EBS General Purpose SSD volumes. For EBS Provisioned IOPS SSD volumes, you can create up to 16 TB with 20,000 IOPS. General Purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps, and Provisioned IOPS SSD volumes have been specified by AWS at 320 MBps when attached to EBS-optimized instances. Learn more about EBS capabilities here. Verify your I/O size and check AWS sizing information to avoid surprises, as all I/O sizes are not considered to be the same. Learn more about Provisioned IOPS, optimized instances, EBS and EC2 fundamentals in this StorageIO AWS primer here.
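For a sense of what the new limits look like in practice, here is a hedged boto3 sketch provisioning both volume types at the sizes described above; the availability zone is a placeholder, and limits may differ in your account and region.

```python
# Provision a 16 TB General Purpose SSD volume and a 20,000 IOPS Provisioned
# IOPS volume (boto3 assumed; zone and sizes are illustrative).
import boto3

ec2 = boto3.client("ec2")
gp2 = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=16384,        # GiB, i.e. 16 TB
    VolumeType="gp2",
)
io1 = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=16384,
    VolumeType="io1",
    Iops=20000,        # provisioned IOPS
)
print(gp2["VolumeId"], io1["VolumeId"])
```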

Application development, deployment and life-cycle management tools

In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development and configuration along with deployment (life-cycle management). These include tools that AWS uses itself as part of building and maintaining the AWS platform services.

AWS Config (preview, e.g. early access prior to full release)

Management, reporting and monitoring capabilities, including data center infrastructure management (DCIM) for monitoring your AWS resources, configuration (including history), governance, change management and notifications. AWS Config enables capabilities similar to DCIM, a Change Management Database (CMDB), troubleshooting and diagnostics, auditing, and resource and configuration analysis, among other activities. Learn more about AWS Config here.

AWS Service Catalog

AWS announced a new service catalog that will be available in early 2015. This new capability will enable administrators to create and manage catalogs of approved resources for users to consume via their personalized portal. Learn more about AWS Service Catalog here.

AWS CodeDeploy

To support rapid code deployment automation for EC2 instances, AWS has released CodeDeploy. CodeDeploy masks the complexity associated with deployment when adding new features to your applications while reducing error-prone human operations. As part of the announcement, AWS mentioned that they are using CodeDeploy for their own application development, maintenance, change-management and deployment operations. While suited for at-scale deployments across many instances, CodeDeploy works with as little as a single EC2 instance. Learn more about AWS CodeDeploy here.

AWS CodeCommit

For application code management, AWS will be making available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting the standard functionality of Git, including collaboration, you can store anything from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.

AWS CodePipeline

To support application delivery and release automation along with associated management tools, AWS is making available CodePipeline. CodePipeline is a tool (service) that supports builds, checking workflows, code staging, testing and release to production, including support for 3rd-party tool integration. CodePipeline will be available in early 2015; learn more here.

Additional reading and related items

Learn more about the above and other AWS services by actually trying them hands-on using their free tier (AWS Free Tier). View AWS re:Invent produced breakout session videos here, audio podcasts here, and session slides here (all sessions may not yet be uploaded by AWS re:Invent).

What this all means

AWS amazon web services

AWS continues to invest as well as re-invest in its environment, both adding new feature functionality and expanding the extensibility of those features. This means that AWS, like other vendors or service providers, adds new check-box features; however, they also increase the depth and extensibility of those capabilities. Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure, including compute (server), storage and database, networking along with associated management tools, while also adding extra developer tools. Developer tools include life-cycle management supporting code creation, testing, tracking and change management among other activities.

Another observation is that while AWS continues to promote the public cloud, such as the services they offer, as the present and future, they are also talking hybrid cloud. Granted you have to listen carefully, as you may not simply hear hybrid cloud tossed around the way some use it; however listen for and look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS marketplace. AWS is also speaking the language of enterprise and traditional IT, from applications and development to data and information infrastructure, while also walking the cloud talk. What this means is that AWS realizes it needs to help existing environments evolve and make the transition to the cloud, which means speaking their language vs. converting them to cloud conversations first in order to then migrate them to the cloud. These steps should make AWS practical for many enterprise environments looking to make the transition to public and hybrid cloud at their own pace, some faster than others. More on these and some related themes in future posts.

The AWS re:Invent event continues to grow year over year. I heard a figure of over 12,000 people, however it was not clear if that included exhibiting vendors, AWS people, attendees, analysts, bloggers and media among others. A simple validation is that the keynotes and the expo were in the larger rooms used by events such as EMCworld and VMworld when they are hosted in Las Vegas, vs. what I saw last year at re:Invent. Unlike some large events such as VMworld, where at best there is a waiting queue or line to get into sessions or the hands-on lab (HOL), AWS re:Invent, while becoming more crowded, is still easy to get into, with time to spend using the HOL, which is of course powered by AWS, meaning you can later resume what you started while at re:Invent. Overall a good event and a nice series of enhancements by AWS; looking forward to next year's AWS re:Invent.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

November 2014 Server StorageIO Update Newsletter

November 2014

Hello and welcome to this November Server and StorageIO update newsletter. Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

Cheers gs

Industry Trends and Perspectives

Storage trends

A few weeks ago I attended AWS re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related topic themes, you should check out AWS re:Invent. For those who need an AWS primer or refresher visit here.

AWS made several announcements at re:Invent including many around development tools, compute and data storage services. One of those to keep an eye on is Aurora, a cloud-based relational database service that complements existing RDS offerings. Aurora is positioned as an alternative to traditional SQL based transactional databases commonly found in enterprise environments (e.g. SQL Server, IBM DB2/UDB, Oracle among others). I will put some additional notes and perspectives together in a StorageIOblog post along with some video from AWS soon.
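
Since Aurora is positioned as MySQL-compatible, existing drivers and tools should connect to it like any MySQL endpoint. A minimal sketch in Python using the PyMySQL driver (the endpoint, credentials, database and table are hypothetical placeholders):

    import pymysql

    # Connect to an Aurora (MySQL-compatible) endpoint; host and
    # credentials below are hypothetical placeholders.
    conn = pymysql.connect(
        host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
        user="admin",
        password="secret",
        database="orders",
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM orders")
            print("Row count:", cur.fetchone()[0])
    finally:
        conn.close()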

Commentary In The News

StorageIO news

Following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular including concerns about privacy, security and availability.

Over at Processor: Comments on Datacenters, Decide Whether To Build Or Not To Build, and controlling storage costs via insight and action. EdTechMagazine: has some comments on IaaS and Is Lean IT Here to Stay, while at CyberTrend perspectives on Better Servers for Better Business.

Across the pond over at the UK based Computerweekly comments on AWS launching Aurora cloud-based relational database engine, and hybrid cloud storage. Some comments on Overland Storage RAINcloud can be found at SearchStorage, while SearchDatabackup has some comments on the Symantec break-up making sense for storage.

For those of you who speak Dutch, here is an interview (via it-infra.nl) I did while in Holland earlier this year about storage and your business.

View other industry trends comments here

Tips and Articles

View recent as well as past tips and articles here

StorageIOblog posts

Recent StorageIOblog posts include:

View other recent as well as past blog posts here

In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    November 11-13, 2014
    AWS re:Invent Las Vegas

    View other recent and upcoming events here

    Webinars

    December 11, 2014 – BrightTalk
    Server & Storage I/O Performance

    December 10, 2014 – BrightTalk
    Server & Storage I/O Decision Making

    December 9, 2014 – BrightTalk
    Virtual Server and Storage Decision Making

    December 3, 2014 – BrightTalk
    Data Protection Modernization

    November 13 9AM PT – BrightTalk
    Software Defined Storage

    November 11 10AM PT
    Google+ Hangout Dell BackupU

    November 11 9AM PT – BrightTalk
    Software Defined Data Centers

    Videos and Podcasts

    VMworld 2014 review
    Video: Click to view VMworld 2014 update

    StorageIO podcasts are also available at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    Lenovo ThinkServer TD340
    Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here), in fact I ended up buying a TS140 after the review, and a few months back picked up yet another one. This StorageIOlab review looks at the Lenovo ThinkServer TD340 Tower Server which besides having a larger model number than the TS140 also has a lot more capabilities (server compute, memory, I/O slots and internal hot-swap storage bays). Read more about the TD340 here.

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    StorageIO Out and About Update – VMworld 2014

    StorageIO Out and About Update – VMworld 2014

    Here is a quick video montage, or mash-up if you prefer, that Cory Peden (aka the Server and StorageIO Intern @Studentof_IT) put together using some video we recorded while at VMworld 2014 in San Francisco. In this YouTube video we take a quick tour around the expo hall to see who as well as what we ran into while out and about.

    VMworld 2014 StorageIO Update
    Click on above image to view video

    For those of you who were at VMworld 2014 the video (click above image) will give you a quick déjà vu of the sights and sounds, while for those who were not there, see what you missed to plan for next year. Watch for appearances from Gina Minks (@Gminks) aka Gina Rosenthal (of BackupU) and Michael (not Dell) of Dell Data Protection, and Luigi Danakos (@Nerdblurt) of HP Data Protection who lost his voice (tweet Luigi if you can help him find his voice). With Luigi we were able to get in a quick game of buzzword bingo before catching up with Marc Farley (@Gofarley) and John Howarth of Quaddra Software. Marc and John talk about their new solution from Quaddra which will enable searching and discovering data across different storage systems and technologies.

    Other visits include a quick look at an EVO:Rail from Dell, along with Docker for Smarties overview with Nathan LeClaire (@upthecyberpunks) of Docker (click here to watch the extended interview with Nathan).

    Docker for smarties

    Check out the conversation with Max Kolomyeytsev of StarWind Software (@starwindsan) before we get interrupted by a sales person. During our walk about, we also bump into Mark Peters (@englishmdp) of ESG facing off video camera to video camera.

    Watch for other things including rack cabinets that look like compute servers yet have a large video screen so they can be software defined for different demo purposes.

    virtual software defined server

    Watch for more Server and StorageIO Industry Trend Perspective podcasts, videos as well as out and about updates soon, meanwhile check out others here.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    This is the first post of a two part series, read the second post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

    The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead the questions are when, where, using what, how to configure and related themes. SSD, including traditional DRAM and NAND flash-based technologies, is like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative (aka hybrid) way. For example NAND flash SSD as part of an enterprise tiered storage strategy can be implemented server-side using PCIe cards, SAS and SATA drives as targets or as cache along with software, as well as leveraging SSD devices in storage systems or appliances.

    Seagate 1200 SSD
    Seagate 1200 Enterprise SAS 12Gbs SSD Image via Seagate.com

    Another place where NAND flash can be found complementing SSD devices is in so-called Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD), including a new generation that accelerates writes as well as reads, such as those Seagate refers to as Enterprise TurboBoost. The Enterprise TurboBoost drives (view the companion StorageIO Lab TurboBoost review white paper here) were previously known as Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD). Read more about TurboBoost here and here.

    The best server and storage I/O is the one you do not have to do

    Keep in mind that the best server or storage I/O is the one that you do not have to do, with the second best being the one with the least overhead, resolved as close to the processor (compute) as possible or practical. The following figure shows that the best place to resolve server and storage I/O is as close to the compute processor as possible, however only a finite amount of storage memory can be located there. This is where the server memory and storage I/O hierarchy comes into play, which is also often thought of in the context of tiered storage balancing performance and availability with cost and architectural limits.

    Also shown is locality of reference, which refers to how close data is to where it is being used, and includes cache effectiveness or buffering. Hence a small amount of flash and DRAM cache in the right location can have a large benefit. Now if you can afford it, install as much DRAM along with flash storage as possible; however if you are like most organizations with finite budgets yet server and storage I/O challenges, then deploy a tiered flash storage strategy.

    flash cache locality of reference
    Server memory storage I/O hierarchy, locality of reference

    Seagate 1200 12Gbs Enterprise SAS SSD’s

    Back to the Seagate 1200 12Gbs Enterprise SAS SSD which is covered in this StorageIO Industry Trends Perspective thought leadership white paper. The focus of the white paper is to look at how the Seagate 1200 Enterprise class SSD’s and 12Gbps SAS address current and next generation tiered storage for virtual, cloud, traditional Little and Big Data infrastructure environments.

    Seagate 1200 Enterprise SSD

    This includes providing proof points running various workloads including Database TPC-B, TPC-E and Microsoft Exchange in the StorageIO Labs along with cache software comparing SSD, SSHD and different HDD’s including 12Gbs SAS 6TB near-line high-capacity drives.

    Seagate 1200 Enterprise SSD Proof Points

    The proof points in this white paper are from an applications focus perspective, representing more of an end-to-end real-world situation. While they are not included in this white paper, StorageIO has run traditional storage building-block focused workloads, which can be found at StorageIOblog (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?). These include tools such as Iometer, iorate and vdbench among others, for various I/O sizes, mixed, random, sequential, reads, writes along with “hot-band” across different numbers of threads (concurrent users). “Hot-band” is part of the SNIA Emerald energy effectiveness metrics for looking at sustained storage performance using tools such as vdbench. Read more about various server and storage I/O benchmarking tools and techniques here.
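
    To make the building-block idea concrete, below is a minimal Python sketch of a random-read micro-benchmark that reports IOPS and average latency; it is illustrative only (the path, I/O size and duration are assumptions) and is not one of the tools used for the white paper proof-points. Note that without O_DIRECT, or a test file much larger than RAM, results will largely reflect OS caching rather than the device.

        import os, random, time

        # Minimal random-read micro-benchmark (illustrative; not a replacement
        # for Iometer, iorate or vdbench). Requires Unix and Python 3 (os.pread).
        path = "/tmp/testfile.bin"   # hypothetical large test file
        io_size = 4096               # 4KB reads
        duration = 10                # seconds to run

        fd = os.open(path, os.O_RDONLY)
        file_size = os.fstat(fd).st_size
        ios, busy = 0, 0.0
        deadline = time.time() + duration
        while time.time() < deadline:
            offset = random.randrange(0, file_size - io_size)
            start = time.time()
            os.pread(fd, io_size, offset)   # one random 4KB read
            busy += time.time() - start
            ios += 1
        os.close(fd)
        print("IOPS: %.0f  avg latency: %.3f ms" % (ios / duration, busy / ios * 1000))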

    For the following series of proof-points (TPC-B, TPC-E and Exchange) a system under test (SUT) consisted of a physical server (described with the proof-points) configured with VMware ESXi, along with guest virtual machines (VMs) configured to do the storage I/O workload. Other servers were used in the case of the TPC workloads as application transaction requesters to drive the SQL Server database and resulting server storage I/O workload. VMware was used in the proof-points to reflect a common industry trend of using virtual server infrastructures (VSI) to support applications including database, email among others. For the proof-point scenarios, the SUT along with the storage system device under test were dedicated to that scenario (e.g. no other workload running) unless otherwise noted.

    Server Storage I/O config
    Server Storage I/O configuration for proof-points

    Microsoft Exchange Email proof-point configuration

    For this proof-point, Microsoft Jet Stress Exchange performance workloads were placed (e.g. Exchange Database – EDB file) on each of the different devices under test with various metrics shown including activity rates and response time for reads as well as writes. For the Exchange testing, the EDB was placed on the device being tested while its log files were placed on a separate Seagate 400GB Enterprise 12Gbps SAS SSD.

    Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and 3TB 7.2K SATA HDD. Email server hosted as a guest on VMware vSphere/ESXi V5.5 running Microsoft SBS2011 Service Pack 1 64 bit. The guest VM (VMware vSphere 5.5) resided on an SSD-based datastore; the physical machine (host) had 14GB DRAM, a quad core (4 x 3.192GHz) Intel E3-1225 v3 CPU, and LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, with Jetstress 2010. All devices being tested were Raw Device Mapped (RDM) where the EDB resided, with the VM on a separate SSD-based datastore from the devices being tested. Log file IOPs were handled via a separate SSD device, also persistent (no delayed writes). The EDB was 300GB and the workload ran for 8 hours.

    Microsoft Exchange VMware SSD performance
    Microsoft Exchange proof-points comparing various storage devices

    TPC-B (Database, Data Warehouse, Batch updates) proof-point configuration

    SSD’s are a good fit for both transaction database activity with reads and write as well as query-based decision support systems (DSS), data warehouse and big data analytics. The following are proof points of SSD capabilities for database activity. In addition to supporting database table files and objects, along with transaction journal logs, other uses include for meta-data, import/export or other high-IO and write intensive scenarios. Two database workload profiles were tested including batch update (write-intensive) and transactional. Activity involved running Transaction Performance Council (TPC) workloads TPC-B (batch update) and TPC-E (transaction/OLTP simulate financial trading system) against Microsoft SQL Server 2012 databases. Each test simulation had the SQL Server database (MDF) on a different device with transaction log file (LDF) on a separate SSD. TPC-B for a single device results shown below.

    TPC-B (write intensive) results below show how TPS work being done (blue) increases from left to right (more is better) for various numbers of simulated users. Also shown on the same line for each amount of TPS work being done is the average latency in seconds (right to left) where lower is better. Results are shown from top to bottom for each group of users (100, 50, 20 and 1) for the different drives being tested. Note how the SSD device does more work at a lower response time vs. traditional HDD’s.

    Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and 3TB Seagate 7.2K SATA HDD. Workload generator and virtual clients ran Windows 7 Ultimate 64 bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad core (4 x 3.192GHz) Intel E3-1225 v3 CPU, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-B (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-B sql server database SSD performance
    TPC-B SQL Server database proof-points comparing various storage devices

    TPC-E (Database, Financial Trading) proof-point configuration

    The following shows results from a TPC-E test (OLTP/transactional workload) simulating a financial trading system. TPC-E is an industry standard workload that performs a mix of read and write database queries. Proof-points were performed with various numbers of users from 10, 20, 50 and 100 to determine Transactions per Second (TPS, aka I/O rate) and response time in seconds. The TPC-E transactional results are shown for each device being tested across different user workloads. The results show how TPC-E TPS work (blue) increases from left to right (more is better) for larger numbers of users, along with the corresponding latency (green) that goes from right to left (less is better). The Seagate Enterprise 1200 SSD is shown at the top of the figure below with a red box around its results. Note how the SSD has a lower latency while doing more work compared to the traditional HDD’s.

    Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and 3TB Seagate 7.2K SATA HDD. Workload generator and virtual clients ran Windows 7 Ultimate 64 bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad core (4 x 3.192GHz) Intel E3-1225 v3 CPU, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-E (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-E sql server database SSD performance
    TPC-E (Financial trading) SQL Server database proof-points comparing various storage devices

    Continue reading part-two of this two-part series here including the virtual server storage I/O blender effect and solution.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    This is the second post of a two part series, read the first post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

    The Server Storage I/O Blender Effect Bottleneck

    The earlier proof-points focused on SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through cache enables a given amount of SSD to give a performance benefit to other local and networked storage devices.
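
    As a conceptual aside (a toy sketch, not the Virtunet implementation), the following Python fragment shows the write-through read-cache idea: reads are served from fast media on a hit, while writes always go straight through to the backing store, so the cache never holds dirty data.

        from collections import OrderedDict

        class WriteThroughReadCache:
            """Toy LRU read cache with write-through semantics (illustrative only)."""
            def __init__(self, backing, capacity=1024):
                self.backing = backing        # dict-like slow store (think HDD)
                self.capacity = capacity      # number of blocks the "SSD" holds
                self.cache = OrderedDict()    # block -> data, kept in LRU order

            def read(self, block):
                if block in self.cache:       # cache hit: served at SSD speed
                    self.cache.move_to_end(block)
                    return self.cache[block]
                data = self.backing[block]    # cache miss: read from the HDD
                self._insert(block, data)
                return data

            def write(self, block, data):
                self.backing[block] = data    # write-through: HDD always current
                self._insert(block, data)     # keep cache coherent with the write

            def _insert(self, block, data):
                self.cache[block] = data
                self.cache.move_to_end(block)
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)   # evict least recently used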

    traditional server storage I/O
    Non-virtualized servers with dedicated storage and I/O paths.

    Aggregation causes aggravation with I/O bottlenecks because of consolidation using server virtualization. The following figure shows non-virtualized servers with their own dedicated physical machine (PM) and I/O resources. When various servers are virtualized and hosted by a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

    virtual server storage I/O blender
    Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

    The figure above shows aggregation causing aggravation with the result being I/O bottlenecks as various applications performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed up and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD in addition to as a target device for storing data is as a cache to cut bottlenecks for traditional spinning HDD.

    In the following figure a solution is shown introducing I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravations.

    Creating a server storage I/O blender bottleneck

    Addressing the VMware Server Storage I/O blender with cache

    Addressing server storage I/O blender and other bottlenecks

    For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs including a SQL Server 2012 database and an Exchange server shared the same underlying storage I/O infrastructure including HDD’s. The 6TB (Enterprise Capacity) HDD was configured as a VMware datastore and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

    Server storage I/O with virtualization proof-point configuration topology

    The following figure shows two sets of proof points, cached (top) and non-cached (bottom), with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running on separate virtual machines (VMs), all on the same physical machine host (SUT), with database transactions being driven by two separate servers. In these proof-points the application data was placed onto the 6TB SAS HDD to create a bottleneck, with a portion of the SSD used as a cache. Note that the Virtunet cache software allows you to use part of an SSD device for cache, with the balance used as a regular storage target should you want to do so.

    If you have paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and database along with log or journal files were placed on separate drives for performance. In the following proof-point, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B as well as TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

    storage I/O blender solved
    Solving the VMware Server Storage I/O blender with cache

    The cache and non-cached mixed workloads shown above prove how an SSD based read-cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization.

    For the workloads shown above, all data (database tables and logs) were placed on VMware virtual disks created from a datastore using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

    The guest VM system disks, which included paging, applications and other data files, were virtual disks using a separate datastore mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours, with the TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database.

    For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

    During the cached tests, the physical HDD for the data files (e.g. 6TB HDD) and system volumes (1TB HDD) were read cache enabled. All caching was disabled for the non-cached workloads.

    Note that this was only a read cache, which has the side benefit of off-loading those activities, enabling the HDD to focus on writes or read-ahead. Also note that the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data; there was also the combined space, and thus cache impact, of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDD’s.

    Seagate 6TB 12Gbs SAS high-capacity HDD

    While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbs SAS SSD, the caching software (Virtunet) and Enterprise TurboBoost drives also play key supporting and favorable roles. However the 6TB 12Gbs SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbs SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

    This opens the door for a great combination: leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available in either 12Gbs SAS, 6Gbs SAS or 6Gbs SATA, also has enhanced durability with a Read Bit Error Rate of 1 in 10^15 (e.g. on average one unrecoverable read error per 10^15 bits read) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence if you are concerned about using large capacity HDD’s and having them fail, make sure you go with those that have a better Bit Error Rate rating (a higher power of ten) and a low AFR, which are more common with enterprise class vs. lower cost commodity or workstation drives. Note that these high-capacity enterprise HDD’s are also available with Self-Encrypting Drive (SED) options.
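
    To put that error-rate rating in perspective, here is a quick back-of-the-envelope calculation in Python for the expected number of unrecoverable read errors when reading an entire 6TB drive once, assuming the 1-per-10^15-bits rating and independent errors:

        import math

        capacity_bits = 6e12 * 8          # 6TB drive expressed in bits
        ber = 1e-15                       # one unrecoverable error per 10^15 bits read

        expected_errors = capacity_bits * ber
        # Probability of at least one unrecoverable error in a full-drive read,
        # modeling each bit read as an independent trial.
        p_at_least_one = 1 - math.exp(-expected_errors)

        print("Expected errors per full read: %.3f" % expected_errors)   # ~0.048
        print("Chance of >=1 error: %.1f%%" % (p_at_least_one * 100))    # ~4.7%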

    Summary

    Read more in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of Seagate 1200 12Gbs SAS SSD’s and visit the Seagate Enterprise 1200 12Gbs SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. There is a difference between all data on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

    Key themes to keep in mind include:

    • Aggregation can cause aggravation which SSD can alleviate
    • A relatively small amount of flash SSD in the right place can go a long way
    • Fast flash storage needs fast server storage I/O access hardware and software
    • Locality of reference with data close to applications is a performance enabler
    • Flash SSD everywhere does not mean everything has to be SSD based
    • Having some amount of flash in different places is important for flash everywhere
    • Different applications have various performance characteristics
    • SSD as a storage device or persistent cache can speed up IOPs and bandwidth

    Flash and SSD are in your future; this comes back to the questions of how much flash SSD you need, along with where to put it, how to use it and when.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Is Computer Data Storage Complex? It Depends

    Is Computer Data Storage Complex? It Depends

    I often get asked, or told, that computer data storage is complex, with so many options to choose from and apples to oranges comparisons among other things.

    On a recent trip to Europe, while being interviewed by a Dutch journalist in Nijkerk Holland at a Brouwer Storage Consultancy event I was presenting at, the question came up again about storage complexity. Btw, you can read the article on data storage industry trends here (it's in Dutch).

    I hesitated and thought for a moment, then responded that in some ways it’s not as complex as some make it seem, although there is more to data storage than just cost per capacity. As I usually do when asked or told how complex data storage is, my response is a mixed yes, it (storage, data and information infrastructure) is complex; however, let's put it in perspective: is storage any more complex than other things?

    Our conversation then evolved with an example: I find shopping for an automobile complex unless I know exactly what I’m looking for. After all, there are cars, trucks, SUV’s, used vs. new, buy vs. lease, different manufacturers, makes and models, speeds, cargo capacity, management tools and interfaces, not to mention metrics and fuel.

    This is where I usually mention how complex, IMHO, buying a new car or vehicle is with all the different options, that is unless you know what you want, or know your selection criteria and options. Same with selecting a new laptop computer, tablet or smart phone, not to mention a long list of other things that to outsiders can also seem complex, intimidating or overwhelming. However let's take a step back to look at storage, then return to compare some other things that may be confusing to those who are not focused on them.

    Stepping back looking at storage

    Similar to other technologies, there are different types of data storage to meet various needs from performance to space capacity as well as support various forms of scaling.

    server and storage I/O flow
    Server and storage I/O fundamentals

    Storage options
    Various types of storage devices including HDD’s, SSHD/HHDD’s and SSD’s

    Storage type options
    Various types of storage devices

    Storage I/O decision making
    Storage options, block, file, object, ssd, hdd, primary, secondary, local and cloud

    Shopping for other things can be complex

    During my return trip to the US from the Dutch event, I had a layover at London Heathrow (LHR), and walking the concourse it occurred to me that while there are complexities involved with different technologies including storage, data and information infrastructures, there are plenty of other complexities in everyday life as well.

    Same thing with shoes, so many different options, not to mention cell phones, laptops and tablets, PCIe, or how about TVs?

    I want to go on a trip: do I book based on the lowest cost for air fare, then hotel and car rental, or do I purchase a package? For the air fare, is it the cheapest one, which takes all day to get from point A to point B via plane changes at points C, D and E, not to mention paying extra fees, vs. paying a higher price for a direct flight with extra amenities?

    Getting hungry so what to do for dinner, what type of cuisine or food?

    Hand Baggage options
    How about a new handbag or perhaps shoes?

    Baggage options
    How about a new backpack, brief case or luggage?

    Beverage options
    What to drink for a beverage, so many options unless you know what you want.

    PDA options
    Complexity of choosing what cell phone, PDA or other electronics

    What to read options
    How about what to read including print vs. online accessible content?

    How about auto parts complexity

    Once I got home from my European trip I had some mechanical things to tend to including replacing some spark plugs.

    Auto part options
    How about automobile parts from tires, to windshield wiper blades to spark plugs?

    Sure, if you know the exact part number, and assuming that part number has not changed, then you can start shopping for the part. However recently I had a part number based on a vehicle serial number (e.g. make, model, year, etc.) only to receive the wrong part. Sure the part numbers were correct, however somewhere along the line the manufacturer made a change and not all downstream vendors knew about it; granted, I eventually received the correct part.

    Back to tech and data infrastructures

    Ok, hopefully you got the point from the above examples among many others: we live in a world full of options, and those options can bring complexity.

    What type of network or server? How about operating system, browser, database, programming or development language as there are different needs and options?

    Sure there are many storage options as not everything is the same.

    Likewise, while there can be a simple answer, or a trend of prescribing what to use before the question is understood (perhaps due to a preference) or explained, the best or applicable answer may be it depends. However saying it depends may seem complex to those who just want a simple answer.

    Closing Comments

    So is storage more complex than other technologies, tools, products or services?

    What say you?

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    What does server storage I/O scaling mean to you?

    What does server storage I/O scaling mean to you?

    Scaling means different things to various people depending on the context or what it is referring to.

    For example, scaling can mean having or doing more of something, or less, as well as referring to how that more, or less, of something is implemented.

    Scaling occurs in a couple of different dimensions and ways:

    • Application workload attributes – Performance, Availability, Capacity, Economics (PACE)
    • Stability without compromise or increased complexity
    • Dimension and direction – Scaling-up (vertical), scaling-out (horizontal), scaling-down

    Scaling PACE – Performance Availability Capacity Economics

    Often I hear people talk about scaling only in the context of space capacity. However there are other aspects including performance and availability, as well as scaling-up or scaling-out. Scaling from an application workload perspective includes four main themes: performance, availability, capacity and economics (as well as energy).

    • Performance – Transactions, IOP’s, bandwidth, response time, errors, quality of service
    • Availability – Accessibility, durability, reliability, HA, BC, DR, Backup/Restore, BR, data protection, security
    • Capacity – Space to store information or place for workload to run on a server, connectivity ports for networks
    • Economics – Capital and operating expenses, buy, rent, lease, subscription

    Scaling with Stability

    The latter of the above items should be thought of more in terms of a by-product, result or goal of implementing scaling. Scaling should not result in a compromise of some other attribute, such as increasing performance at the cost of capacity or increased complexity. Scaling with stability also means that as you scale in some direction, or across some attribute (e.g. PACE), there should not be a corresponding increase in management complexity, or loss of performance and availability. To use a popular buzz-term, scaling with stability means performance, availability, capacity and economics should scale linearly with their capabilities, or perhaps cost less.

    Scaling directions: Scaling-up, scaling-down, scaling-out

    server and storage i/o scale options

    Some examples of scaling in different directions include:

    • Scaling-up (vertical scaling with bigger or faster)
    • Scaling-down (vertical scaling with less)
    • Scaling-out (horizontal scaling with more of what is being scaled)
    • Scaling-up and out (combines vertical and horizontal)

    Of course you can combine the above in various combinations, such as the example of scaling up and out, as well as apply different names and nomenclature to suit your needs or preferences. The following is a closer look at the above with some simple examples.

    server and storage i/o scale up
    Example of scaling up (vertically)

    server and storage i/o scale down
    Example of scaling-down (e.g. for smaller scenarios)

    server and storage i/o scale out
    Example of scaling-out (horizontally)

    server and storage i/o scale out
    Example of scaling-out and up (horizontally and vertically)

    Summary and what this means

    There are many aspects to scaling, as well as side-effects or impacts as a result of scaling.

    Scaling can refer to different workload attributes as well as how to support those applications.

    Regardless of what you view scaling as meaning, keep in mind the context of where and when it is used, and that others might have a different view of scale.

    Ok, nuff said (for now)…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Lenovo ThinkServer TD340 StorageIO lab Review

    Storage I/O trends

    Lenovo ThinkServer TD340 Server and StorageIO lab Review

    Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here); in fact I ended up buying a TS140 after the review, and a few months back picked up another one. This StorageIOlab review looks at the Lenovo ThinkServer TD340 Tower Server, which besides having a larger model number than the TS140 also has a lot more capabilities (server compute, memory, I/O slots and internal hot-swap storage bays). Pricing varies depending on specific configuration options, however at the time of this post Lenovo was advertising a starting price of $1,509 USD for a specific configuration here. You will need to select different options to determine your specific cost.

    The TD340 is one of the servers that Lenovo had prior to its acquisition of the IBM x86 server business, which you can read about here. Note that the Lenovo acquisition of the IBM xSeries business group began in early October 2014 and is expected to be completed across different countries in early 2015. Read more about the IBM xSeries business unit here, here and here.

    The Lenovo TD340 Experience

    Let's start with the overall experience, which was very easy other than deciding what make and model to try. This included first answering some questions to get the process moving, and agreeing to keep the equipment safe, secure and insured as well as not damaging anything. Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived.

    TD340 is ready for use
    TD340 with Keyboard and Mouse (Monitor and keyboard not included)

    One of the reasons I have a photo of the TD340 on a desk is that I initially put it in an office environment similar to what I did with the TS140 as Lenovo claimed it would be quiet enough to do so. I was not surprised and indeed the TD340 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet the TD340 is a good fit for environments that need a server that has to go into an office environment as opposed to a server or networking room.

    Welcome to the TD340
    Lenovo ThinkServer Setup

    TD340 Setup
    Lenovo TD340 as tested in BIOS setup, note the dual Intel Xeon E5-2420 v2 processors

    TD340 as tested

    TD340 Selfie of whats inside
    TD340 "Selfie" with 4 x 8GB DDR3 DIMM (32GB) and PCIe slots (empty)

    TD340 disk drive bays
    TD340 internal drive hot-swap bays

    Speeds and Feeds

    The TD340 that I tested was a Machine type 7087 model 002RUX which included 4 x 16GB DIMMs and both processor sockets occupied.

    You can view the Lenovo TD340 data sheet with more speeds and feeds here, however the following is a summary.

    • Operating systems support include various Windows Servers (2008-2012 R2), SUSE, RHEL, Citrix XenServer and VMware ESXi
    • Form factor is 5U tower with weight starting at 62 pounds depending on how configured
    • Processors include support for up to two (2) Intel E5-2400 v2 series
    • Memory includes 12 DDR3 DRAM DIMM slots (LV RDIMM and UDIMM) for up to 192GB.
    • Expansion slots vary depending on whether one or two CPU sockets are populated. With a single CPU socket installed there is 1 x PCIe Gen3 FH/HL x8 mechanical (x4 electrical), 1 x PCIe Gen3 FH/HL x16 mechanical (x16 electrical) and a single PCI 32-bit/33 MHz FH/HL slot. With two CPU sockets installed extra PCIe slots are enabled: 1 x PCIe Gen3 FH/HL x8 mechanical (x4 electrical), 1 x PCIe Gen3 FH/HL x16 mechanical (x16 electrical), 3 x PCIe Gen3 FH/HL x8 mechanical (x8 electrical) and a single PCI 5V 32-bit/33 MHz FH/HL slot
    • Two 5.25” media bays for CD or DVDs or other devices
    • Integrated ThinkServer RAID (0/1/10/5) with optional RAID adapter models
    • Internal storage varies depending on model including up to eight (8) x 3.5” hot swap drives or 16 x 2.5” hot swap drives (HDD’s or SSDs).
    • Storage space capacity varies by the type and size of the drives being used.
    • Networking interfaces include two (2) x GbE
    • Power supply options include a single 625 watt or 800 watt unit, or 1+1 redundant hot-swap 800 watt units, along with five fixed fans.
    • Management tools include ThinkServer Management Module and diagnostics

    What Did I do with the TD340

    After initial check out in an office type environment, I moved the TD340 into the lab area where it joined other servers to be used for various things.

    Some of those activities included using the Windows Server 2012 Essentials along with associated admin activities as well as installing VMware ESXi 5.5.

    TD340 is ready for use
    TD340 with Keyboard and Mouse (Monitor and keyboard not included)

    What I liked

    Unbelievably quiet, which may not seem like a big deal, however if you are looking to deploy a server or system into a small office workspace this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;). Naturally I’m interested in server storage I/O, so having multiple I/O slots is a must have, along with multi-core processors (pretty much standard these days) that support VT and EPT for running VMware (these were disabled in the BIOS, however that was an easy fix).

    What I did not like

    The only thing I did not like was that I ran into a compatibility issue trying to use an LSI 9300 series 12Gb SAS HBA, which Lenovo is aware of and perhaps has even addressed by now. What I ran into is that the adapters work, however I was not able to get the full performance out of them as compared to other systems, including my slower Lenovo TS140s.

    Summary

    Overall I give Lenovo and the TD340 a "B+", which would have been an "A" had I not gotten myself into a BIOS situation, or had I been able to run the 12Gbps SAS PCIe Gen 3 cards at full speed. Likewise the Lenovo service and support also helped to improve the experience. Otoh, if you are simply going to use the TD340 in a normal out of the box mode without customizing it to add your own adapters or install your own operating system or hypervisors (beyond those that are supplied as part of the install setup tool kit), you may have an "A" or "A+" experience with the TD340.

    Would I recommend the TD340 to others? Yes for those who need this type and class of server for Windows, *nix, Hyper-V or VMware environments.

    Would I buy a TD340 for myself? Maybe if that is the size and type of system I need, however I have my eye on something bigger. On the other hand for those who need a good value server for a SMB or ROBO environment with room to grow, the TD340 should be on your shopping list to compare with other solutions.

    Disclosure: Thanks to the folks at Lenovo for sending and making the TD340 available for review and a hands on test experience including covering the cost of shipping both ways (the unit should now be back in your possession). Thus this is not a sponsored post as Lenovo is not paying for this (they did loan the server and covered two-way shipping), nor am I paying them, however I have bought some of their servers in the past for the StorageIOLab environment that are companions to some Dell and HP servers that I have also purchased.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    September October 2014 Server and StorageIO Update Newsletter

    September and October 2014

    Hello and welcome to this joint September and October Server and StorageIO update newsletter. Since the August newsletter, things have been busy with a mix of behind the scenes projects, as well as other activities including several webinars, on-line along with in-person events in the US as well as Europe.

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers gs

    Industry Trends and Perspectives

    Storage trends

    In September I was invited to do a key-note opening presentation at the MSP area CMG event. The theme for the September CMG event was "Flash – A Real Life Experience", with a focus on what people are doing, how they are testing and evaluating, including use of hybrid solutions, as opposed to vendor marketing sessions. My session was titled "Flash back to reality – Myths and Realities, Flash and SSD Industry trends perspectives plus benchmarking tips" and can be found here. Thanks to Tom Becchetti and the MSP CMG (@mspcmg) folks for a great event.

    There are many facets to hybrid storage, including different types of media (SSD and HDD’s) along with unified or multi-protocol access. Then there is hybrid storage that spans local and public clouds. Here is a link to an on-line Internet Radio show via Information Week, along with an on-line chat, about Hybrid Storage for Government.

    Some things I’m working with or keeping an eye on include Cloud, Converged solutions, Data Protection, Business Resiliency, DCIM, Docker, InfiniBand, Microsoft (Hyper-V, SOFS, SMB 3.0), Object Storage, SSD, SDS, VMware and VVOL among other items.

    Commentary In The News

    StorageIO news

    A lot has been going on in the IT industry since the last StorageIO Update newsletter. The following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular including concerns about privacy, security and availability. Here are some comments at SearchCloudComputing about moving on from cloud deployment heartbreak.

    NAND flash Solid State Devices (SSD) continue to increase in customer deployments; over at Processor, here are some comments on Incorporating SSD’s Into Your Storage Plan. Also on SSD, here are some perspectives making the Argument For Flash-Based Storage. Some other comments over at Processor.com include looking At Disaster Recovery As A Service, tips to Avoid In Data Center Planning, making the most of Enterprise Virtualization, as well as New Tech, Advancements To Justify Servers. Part of controlling and managing storage costs is having timely insight and metrics that matter; here are some more perspectives and also here.

    Over at SearchVirtualStorage I have some comments on how to configure and manage storage for a virtual desktop environment (VDI) while over at TechPageOne there are perspectives on top reasons to switch to Windows 8. 

    Some other comments and perspectives are over at EnterpriseStorageForum including Top 10 Ways to Improve Data Center Energy Efficiency. At InfoStor there are comments and tips about Object Storage, while at SearchDataBackup I have some perspectives about Symantec being broken up.

    View other industry trends comments here

    Tips and Articles

    Recent Server and StorageIO tips and articles appearing in various venues include, over at SearchCloudStorage, a series of discussions of often asked questions:

    Are you concerned with the security of the cloud?
    Is the cost of cloud storage really cheaper?
    What’s important to know about cloud privacy policy?
    Are more than five nines of availability really possible?
    What to look for in an enterprise file sync-and-share app?
    How primary storage clouds and cloud backup differ?
    What should I consider when using SSD cloud?
    What is difference between a snapshot and a clone?

    View other recent as well as past tips and articles here

    StorageIOblog posts

    Recent StorageIOblog posts include:

    View other recent as well as past blog posts here

    In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    September 25, 2014
    MSP CMG – Flash and SSD performance

    October 8-10, 2014
    Nijkerk Netherlands Brouwer Seminar Series

    November 11-13, 2014
    AWS re:Invent Las Vegas

    View other recent and upcoming events here

    Webinars

    November 13 9AM PT
    BrightTalk – Software Defined Storage

    November 11 10AM PT
    Google+ Hangout Dell BackupU

    November 11 9AM PT
    BrightTalk – Software Defined Data Centers

    October 16 9AM PT
    BrightTalk – Cloud Storage Decision Making

    October 15 1PM PT
    BrightTalk – Hybrid Cloud Trends

    October 7 11AM PT
    BackupU – Data Protection Management

    September 18 8AM CT
    Nexsan – Hybrid Storage

    September 18 9AM PT
    BrightTalk – Converged Storage

    September 17 1PM PT
    BrightTalk – DCIM

    September 16 1PM PT
    BrightTalk – Data Center Convergence

    September 16 Noon PT
    BrightTalk – BC, BR and DR

    September 16 1PM CT
    StarWind – SMB 3.0 & Microsoft SOFS

    September 16 9AM PT
    Google+ Hangout – BackupU – Replication

    September 2 11AM PT
    Dell BackupU – Replication

    Videos and Podcasts

    Docker for Smarties
    Video: Docker for Smarties

    StorageIO podcasts are also available at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    Enterprise 12Gbps SAS and SSD’s
    Better Together – Part of an Enterprise Tiered Storage Strategy

    In this StorageIO Industry Trends Perspective thought leadership white paper we look at how enterprise class SSD’s and 12Gbps SAS address current and next generation tiered storage for virtual, cloud, traditional Little and Big Data environments. This report includes proof points running various workloads including Database TPC-B, TPC-E, Microsoft Exchange in the StorageIO Labs along with cache software comparing SSD, SSHD and HDD’s. Read the white paper compliments of Seagate 1200 12Gbs SAS SSD’s.

    Seagate SSD White Paper

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    VMware Cisco EMC VCE Zen and now server storage I/O convergence

    Storage I/O trends

    VMware Cisco EMC VCE Zen and now server storage I/O convergence

In case you have not heard, the joint venture (JV) founded in the fall of 2009 between Intel, VMware, Cisco and EMC called VCE had a change of ownership today.

    Well, kind of…

    Who is VCE and what’s this Zen stuff?

For those not familiar or who need a recap, VCE was formed to create converged server, storage I/O and networking hardware and software solutions combining technologies from its investors, resulting in solutions called vBlocks.

The major investors were Cisco, which provides the converged servers and I/O networking along with associated management tools, and EMC, which provides the storage systems along with their associated management tools. Minority investors include VMware (which is majority-owned by EMC), which provides the server virtualization aka software-defined data center management tools, and Intel, whose processor chip technologies are used in the vBlocks. What has changed between Zen (e.g. yesterday or in the past) and now is that Cisco has sold the majority of its investment ownership in VCE to EMC (Cisco is retaining about 10%). Learn more about VCE, their solutions and valueware in this post here (VCE revisited, now and Zen).

    Activist activating activity?

EMC pulling VCE in-house, which should prop up its own internal sales figures by perhaps a few billion USD within a year or so (if not sooner), is not as appealing to activist investors who want results now, such as selling off parts of the company (e.g. EMC, VMware or other assets) or the entire company.

However, EMC has been under pressure from activist shareholder Elliott Management to divest or sell off portions of its business, such as VMware, so that the investors (including the activist) can make more money. For example, there have been recent stories about EMC looking to sell or merge with the likes of HP (which is now buying back shares and splitting up its own business) among others, which certainly must make the activist investors happy.

However, it appears the activist investors who want to see things sold off to make money are not happy with EMC buying or investing.

    Via Bloomberg

    “The last thing on investors’ minds is the future of VCE,” Daniel Ives, an analyst with FBR Capital Markets, wrote in a note today. “EMC has a fire in its house right now and the company appears focused on painting its bedroom (e.g. VCE), while the Street wants a resolution on the strategic ownership situation sooner rather than later.”

    Read more at Bloomberg

What's this EMC Federation stuff?

Note that EMC has organized itself into a federation that consists of EMC Information Infrastructure (EMCII), or what you might know as the traditional EMC storage and related software solutions, VMware, Pivotal and RSA. Also note that each of those federated companies has its own CEO as well as holdings or ownership of other companies. However, all report to a common federated leadership, aka EMC. Thus when you hear EMC, depending on the context it could mean the federation mothership that controls the individual companies, or it could refer to EMCII, aka the traditional EMC. Click here to learn more about the EMC federation.

    Converging Markets and Opportunities

Looking beyond near-term or quick gains, EMC could simply be doing what others do to take ownership and control over certain things while reducing the complexities associated with joint ventures. For example, even with EMC and Cisco in a close partnership via VCE, both parties have been free to explore and take part in other initiatives, such as Cisco with EMC competitors NetApp and HDS, among others. On the other hand, EMC partners with Arista for networking, not to mention, via VMware, the acquired virtual networking (aka software-defined networking) company Nicira, now called NSX.

    server and storage I/O road map to convergence

    EMC is also in a partnership with Lenovo for developing servers to be used by EMC for various platforms to support storage, data and information services while shifting the lower-end SMB storage offerings such as Iomega to the Lenovo channel.

Note that Lenovo is in the process of absorbing the IBM xSeries (e.g. x86-based) business unit, a deal that started closing earlier in October (and will take several months to complete in all countries around the world). For its part, Cisco is also partnering with hyper-converged solution provider SimpliVity, while EMC has announced its statement of direction to bring to market its own hyper-converged platform by the end of the year. For those not familiar, hyper-converged solutions are simply the next evolution of converged or pre-bundled turnkey systems (some of you might have just had a déjà vu moment) that today tend to be targeted at SMBs and ROBOs, however they are also used for targeted applications such as VDI in larger environments.

    Storage I/O trends

    What does this have to do with VCE?

If EMC is about to release a hyper-converged solution by year-end, as its statements of direction indicate, to compete head-on with those from Nutanix, SimpliVity and Tintri, as well as perhaps to a lesser extent VMware's EVO:RAIL, then having more control over VCE means reducing if not eliminating complexity around vBlocks (which are Cisco-based with EMC storage) vs. whatever EMC brings to market for hyper-converged. In the past, under the VCE initiative, storage was limited to EMC, servers and networking to Cisco, and hypervisors to VMware; however, what happens in the future remains to be seen.

    Does this mean EMC is moving even more into servers than just virtual servers?

Tough to say, as EMC cannot afford to have its sales force lose focus on its traditional core products while ramping up other business. However, the EMC direct and partner teams want and need to keep up account control, which means gaining market share and footprint in those accounts. This also means EMC needs to find ways to take cost out of the sales and marketing process where possible to streamline, which perhaps bringing VCE in-house will help do.

Will this perhaps give the EMC direct and partner sales teams a new carrot or incentive to promote converged and hyper-converged at the cost of other competitors or incumbents? Perhaps; let's see what happens in the coming weeks.

    What does this all mean?

In a nutshell, IMHO EMC is doing a couple of things here. One is cleaning up ownership in JVs to give itself more control, as well as options for other business transactions (mergers and acquisitions (M&A), sales or divestitures, new joint ventures, etc.). The other is streamlining its business, from decision-making that can quickly respond to new opportunities, to routes to market and other activities (e.g. removing complexity and cost vs. simply cutting cost).

Does this signal the prelude to something else? Perhaps. We know that EMC has made a statement of direction about hyper-converged, and with VCE now more under EMC control, perhaps we will see more options under the VCE umbrella, both for lower-end and entry SMB as well as SME and large enterprise organizations.

    What about the activist investors?

They are going to make noise as long as they can continue to make more money or get what they want. Publicly, I would be shocked if the activist investors were not making statements that EMC should be selling assets, not buying or investing.

On the other hand, any smart investor, financial or other analyst should see through the fog to what this relatively simple transaction means in terms of EMC gaining further control of its future.

Of course the question remains: does EMC stay in control of its current federation of EMC, VMware, Pivotal and RSA, along with each of their respective holdings, or does EMC do a blockbuster merger, divestiture or acquisition?

    server and storage I/O road ahead

    Take a step back, look at the big picture!

    Some things to keep an eye on:

• Will this move help streamline decision-making, enabling new solutions to be brought to market and customers quicker?
• While there is a VMware focus, don't forget about the long-running, decades-old relationship with Microsoft and how that plays into the equation
• Watch for what EMC releases with their hyper-converged solution, as well as where it is focused, not to mention how it is sold
• Also watch the EMC and Lenovo joint initiative, both for the Iomega storage activity as well as what EMC and Lenovo do with and for servers
• Speaking of Lenovo, unless I missed something as of the time of writing this, have you noticed that Lenovo is not yet part of the VMware EVO:RAIL initiative?

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved