What industry pundits love and loathe about data storage

Drew Robb has a good article about what IT industry pundits, including vendors, analysts and advisors, love and loathe about data storage, including comments from myself.

In the article Drew asks: What do you really love about storage and what are your pet peeves?

One of my comments and perspectives is that I like Hybrid Hard Disk Drives (HHDDs) in addition to traditional Hard Disk Drives (HDDs) along with Solid State Devices (SSDs). As much as I like HHDDs, I also believe that, as with any technology, they are not the best solution for everything; however, they can be used in more ways than is commonly seen. Here is the fifth installment of a series on HHDDs that I have done since June 2010, when I received my first HHDD, a Seagate Momentus XT. You can read the other installments of my momentus moments here, here, here and here.

Seagate Momentus XT
HHDD with integrated NAND flash SSD, photo courtesy of Seagate.com

Molly Rector, VP of marketing at tape vendor Spectra Logic, mentioned that what she does not like is companies that base their business plan on patent trolling. I would have expected something different, along the lines of countering or correcting people who say tape sucks, tape is dead, or that tape is the cause of anything wrong with storage, thus clearing the air or putting up a fight on behalf of tape. Go figure…

Another of my comments involved clouds, of which there are plenty of conversations taking place. I do like clouds (I even recently wrote a book involving them); however, I'm a fan of using them where applicable to coexist with and enhance other IT resources. Don't be scared of clouds; however, be ready: do your homework, listen, learn, and do proof of concepts to decide best practices for when, where, what and how to use them.

Speaking of clouds, click here to read about who is responsible for cloud data loss and cast your vote, along with viewing what others think about IT clouds in general here.

Mike Karp (aka twitter @storagewonk), an analyst with Ptak Noel, mentions that midrange environments don't get respect from big (or even startup) vendors.

I would take that a step further by saying that compared to six or so years ago, SMBs are getting night-and-day better respect along with attention from most vendors; however, what is lacking is respect for the SOHO sector (e.g. the lower end of SMB down to, or just above, consumer).

Granted, some that have traditionally sold into those sectors, such as server vendors including Dell and HP, get it or at least see the potential, along with traditional enterprise vendor EMC via its Iomega unit. Yet I still see many vendors, including startups, in general discounting, shrugging off or sneering at the SOHO space, similar to those who dissed or did not respect the SMB space several years ago. Like the SMB space, SOHO requires different products, packaging, pricing and routes to market via channel or etail mechanisms, which means change for some vendors. Those vendors who embraced the SMB market and realized what needed to change to adapt will also stand to do better with SOHO.

Here is the reason that I think SOHO needs respect.

Simple: SOHOs grow up to become SMBs, SMBs grow up to become SMEs, and SMEs grow up to become enterprises, not to mention that the amount of data being generated, moved, processed and stored continues to grow. The net result is that SMB along with SOHO storage demands will continue to grow, and those vendors who can adjust to support those markets will also stand to gain new customers that in turn can become prospects for other solution offerings.

Cloud conversations

Not surprisingly, Eran Farajun of Asigra, which was doing cloud backups decades before they were known as clouds, loves backup (and restores). However, I am surprised that Eran did not jump on the it's time to modernize and re-architect data protection theme. Oh well, I will have to have a chat with Eran on that sometime.

What was surprising were the comments from Panzura, who has a good distributed (read also: cloud) file system that can be used for various things, including online reference data. Panzura has a solution that normally I would not even think about in the context of being pulled into a Datadomain or dedupe appliance type discussion (e.g. tape sucks or other similar themes). So it is odd that they are playing to the tape sucks camp and theme vs. playing to where the technology can really shine, which IMHO is the global, distributed, scale out and cloud file system space. Oh well, I guess you go with what you know or what has worked in the past to get some attention.

Molly Rector of Spectra also mentioned that she likes High Performance Computing; I am surprised she did not throw in high productivity computing as well, in conjunction with big data, big bandwidth, green, dedupe, power, disk, tape and related buzzword bingo terms.

Also there are some comments from myself about cost cutting.

While I see the need for organizations to cut costs during tough economic times, I'm not a fan of simply cutting cost for the sake of cost cutting, as opposed to finding and removing the complexity that in turn drives the cost of doing work. In other words, I'm a fan of finding and removing waste, becoming more effective and productive, and removing the cost of doing a particular piece of work. This in the end meets the aim of the bean counters to cut costs, however it can be done in a way that does not degrade service levels or the customer service experience. For example, instead of looking to cut backup costs, do you know where the real costs of doing data protection exist (hint: swapping out media is treating the symptoms), and if so, what can be done to streamline those from the source of the problem downstream to the target (e.g. media or medium)? In other words, redesign, review and modernize how data protection is done, and leverage data footprint reduction (DFR) techniques including archive, compression, consolidation, data management, dedupe and other technologies in effective and creative ways. After all, return on innovation is the new ROI.
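To make the DFR point a bit more concrete, here is a minimal sketch of block-level deduplication in Python: identical chunks are stored once and referenced by hash. The 4KB chunk size and SHA-256 hash are my own illustrative assumptions, not a description of any particular product.

```python
import hashlib

def dedupe(data, chunk_size=4096):
    """Split data into chunks, storing each unique chunk only once."""
    store = {}   # hash -> unique chunk bytes
    refs = []    # ordered hashes needed to rebuild the original data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        refs.append(digest)
    return store, refs

def rehydrate(store, refs):
    """Reassemble the original data from the chunk store and references."""
    return b"".join(store[digest] for digest in refs)
```

With highly repetitive data (think many nearly identical backups), the chunk store ends up much smaller than the original, which is where dedupe earns its keep.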

Check out Drew's article here to read more on the above topics and themes.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

New Seagate Momentus XT Hybrid drive (SSD and HDD)

Seagate recently announced the next generation Momentus XT Hybrid Hard Disk Drive (HHDD) with a capacity of 750GB in a 2.5 inch form factor and an MSRP of $245.00 USD, including an integrated NAND flash solid state device (SSD). As a refresher, the Momentus XT is an HHDD in that it includes a 4GB NAND flash SSD integrated with a 500GB (or larger) 7,200 RPM hard disk drive (HDD) in a single 2.5 inch package.

Seagate Momentus XT
HHDD with integrated NAND flash SSD, photo courtesy of Seagate.com

This is the fifth installment of a series that I have done since June 2010, when I received my first HHDD, a Seagate Momentus XT. You can read the other installments of my momentus moments here, here, here and here.

What is new with the new generation?
Besides extra storage capacity of up to 750GB (was 500GB), there is twice as much single level cell (SLC) NAND flash memory (8GB vs. 4GB in the previous generation), along with an enhanced interface using 6Gb per second SATA that supports native command queuing (NCQ) for better performance. Note that NCQ was also available on the previous generation Momentus XT, which used a 3Gb SATA interface. Other enhancements include a larger block or sector size of 4,096 bytes vs. the traditional 512 bytes on previous generation storage devices.

This bigger sector size results in less overhead when managing data blocks on large capacity storage devices. Also new are caching enhancements: FAST Factor Flash Management, FAST Factor Boot and Adaptive Memory Technology. Not to be confused with EMC Fully Automated Storage Tiering (the other FAST), Seagate FAST is technology that exists inside the storage drive itself. FAST Factor Boot enables systems to boot and be productive at speeds similar to SSDs, or several times faster than traditional HDDs.
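As a back-of-the-envelope illustration (my arithmetic, not Seagate's), moving from 512-byte to 4,096-byte sectors lets the same capacity be tracked with one-eighth as many sectors:

```python
# 750GB counted in decimal bytes, as drive vendors market capacity
capacity_bytes = 750 * 10**9
sectors_512 = capacity_bytes // 512    # about 1.46 billion sectors to manage
sectors_4096 = capacity_bytes // 4096  # about 183 million sectors to manage
print(sectors_512 // sectors_4096)     # prints 8: one-eighth the metadata to track
```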

The FAST Factor Flash Management provides the integrated intelligence to maximize use of the NAND flash or SSD capabilities along with the spinning HDD to boost performance, while maintaining compatibility with different systems and their operating systems. In addition to performance and interoperability, data integrity and SSD flash endurance are also enhanced for investment protection. The Adaptive Memory technology is a self learning algorithm that gives SSD like performance for often used applications and data, closing the storage capacity to performance gap that has widened along with data center bottlenecks.

Some questions and discussion comments:

When to use SSD vs. HDD vs. HHDD?
If you need the full speed of SSDs to boost performance across all data access, and cost is not an issue for the capacity you need, that is where you should be focused. However, if you are looking for the lowest total cost of storage capacity with no need for performance, then lower cost high capacity HDDs should be on your shopping list. On the other hand, if you want a mix of performance and capacity at an effective price, then HHDDs should be considered.
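To put rough numbers on that trade-off, here is a quick $/GB comparison. The HHDD figure uses the $245 MSRP for the 750GB drive discussed in this post; the SSD and HDD figures are illustrative assumptions of mine, not quoted prices:

```python
# HHDD figure uses the $245 MSRP for the 750GB Momentus XT from this post.
# The SSD and HDD figures are assumed, illustrative price points only.
hhdd_per_gb = 245.00 / 750   # roughly $0.33 per GB
ssd_per_gb = 1.50            # assumed consumer SSD $/GB
hdd_per_gb = 0.10            # assumed high capacity HDD $/GB
print(round(hhdd_per_gb, 2), ssd_per_gb, hdd_per_gb)
```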

Why the price jump compared to first generation HHDD?
IMHO, it has a lot to do with current market conditions, supply and demand.

With the recent floods in Thailand and forecasted HDD and other technology shortages, the law of supply and demand applies. This means that the supply of some products may be constrained, causing demand (and prices) to rise for others. Your particular vendor or supplier may have inventory; however, they will be less likely to heavily discount while there are shortages or market opportunities to keep prices high. There are already examples of this if you check around on various sites to compare prices now vs. a few months ago. Granted, it is the holiday shopping season for both people and organizations spending the last of their available budgets, so there is more demand for available supplies.

What kind of performance or productivity have I seen with HHDDs?
While I have not yet tested and compared the second generation devices, I can attest to the performance improvements, resulting in better productivity, over the past year of using Seagate Momentus XT HHDDs compared to traditional HDDs. Here is a post that you can follow to see some boot performance comparisons as part of virtual desktop infrastructure (VDI) sizing testing I did earlier this year that included both HHDDs and HDDs.

                      HHDD desktop 1   HDD desktop 1     HHDD desktop 2
Avg. IOPS             334              69 to 113         186 to 353
Avg. MBytes/sec       5.36             1.58 to 2.13      2.76 to 5.2
Percent IOPS read     94               80 to 88          92
Percent MBs read      87               63 to 77          84
MBytes read           530              201 to 245        504
MBytes written        128              60 to 141         100
Avg. read latency     2.24ms           8.2 to 9.5ms      1.3ms
Avg. write latency    10.41ms          20.5 to 14.96ms   8.6ms
Boot duration         120 sec          120 to 240 sec    120 sec

Click here to read the entire post about the above table.

When will I jump on the SSD bandwagon?
Great question. I have actually been on the SSD train for several decades: using them, selling them, covering, analyzing and consulting around them, along with other storage mediums including HDD, HHDD, cloud and tape. I have some SSDs and will eventually put them into my laptops, workstations and servers as primary storage when the opportunity makes sense.

Will HHDDs help backup and other data protection tasks?
Yes. In fact, I initially used my Momentus XTs as backup or data protection targets, along with moving large amounts of data between systems faster than my network could support.

Why not use a SSD?
If you need the performance and can afford the price, go SSD!

On the other hand, if you are looking to add a small 64GB, 128GB or even 256GB SSD while retaining a larger capacity, slower and lower cost HDD, an HHDD should be considered as an option. By using an HHDD instead of both an SSD and an HDD, you avoid having to figure out how to install both in space constrained laptops, desktops or workstations. In addition, you avoid the need to either manually move data between the different devices or acquire software or drivers to do that for you.

How much does the new Seagate Momentus XT HHDD cost?
Manufacturer's Suggested Retail Price (MSRP) is listed at $245 for the 750GB version.

Does the Momentus XT HHDD need any special drivers, adapters or software?
No, they are plug and play. There is no need for caching or performance acceleration drivers, utilities or other software. Likewise, there is no need for tiering or data movement tools.

How do you install an HHDD into an existing system?
It is similar to installing a new HDD to replace an existing one, if you are familiar with that process. If not, it goes like this (or use your own preferred approach):

  • Attach the new HHDD to the existing system using a USB to SATA cable
  • Use a disk clone or image tool to make a copy of the existing HDD to the HHDD
  • Note that the system may not be usable during the copy, so plan ahead
  • After the clone or image copy is made, shut down the system, remove the existing HDD and replace it with the HHDD that was connected during the copy (remember to remove the copy cable)
  • Reboot the system to verify all is well; note that it will take a few reboots before the HHDD starts to learn your data and files along with how they are used
  • Regarding your old HDD: save it, put it in a safe place and use it as a disaster recovery (DR) backup. For example, if you have a safe deposit box or somewhere else safe, put it there for when you need it in the future
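The clone step above can be sketched in Python using plain files, which makes it safe to run as-is. For a real clone, the source and destination would be block device paths (for example /dev/sda and /dev/sdb on Linux; those names are assumptions here, so verify yours with a tool such as lsblk first):

```python
import hashlib

def clone(src, dst, chunk_size=4 * 1024 * 1024):
    """Copy src to dst in 4MB chunks; return a checksum of what was read."""
    digest = hashlib.sha256()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
            fout.write(chunk)
    return digest.hexdigest()
```

After cloning, checksum the copy and compare it against the value returned above before swapping the drives.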


Seagate Momentus XT and USB to SATA cable

Can an HHDD fit into an existing slot in a laptop, workstation or server?
Yes. In fact, unlike an HDD and SSD combination that requires multiple slots or forces one device to be external, HHDDs like the Momentus XT simply use the space where your current HDD is installed.

How do you move data to it?
Beyond the first installation described above, the HHDD appears as just another local device, meaning you can move data to or from it like any other HDD, SSD or CD.

Do you need automated tiering software?
No, not unless you need it for some other reason or if you want to use an HHDD as the lower cost, larger capacity option as a companion to a smaller SSD.

Do I have any of the new or second generation HHDDs?
Not yet, maybe soon, and I will do another momentus moment post when that time arrives. For the time being, I will continue to use the first generation Momentus XT HHDDs.

Bottom line (for now): if you are considering large capacity HDDs, check out HHDDs for an added performance boost, including faster boot times as well as quicker access to other data.

On the other hand, if you want an SSD but your budget restricts you to a smaller capacity version, look into how an HHDD can be a viable option for some of your needs.

Ok, nuff said

Cheers gs


How to write, publish and promote a book or blog

Have you ever read an article, blog post or a book and said to yourself that you could do that, perhaps even better?

Well, unless you have already done so, what are you waiting for to write a book, blog, article or create some other form of content using different mediums or venues?

The other evening I attended a local Stillwater (ArtReach St. Croix) event (Publishers Forum) with my wife (karenofarcola.com). Karen is working on getting her first book (fiction for children and young adults) published, so she was interested in meeting the different publishers. For me, I wanted to learn about the local publishers and hear what they had to say, in addition to meeting the purveyor of a local book store (Valley Book Seller) who helped promote the event.

It was interesting listening to the panel, made up of a nonprofit publisher (Milkweed Editions), a full service self publishing venue (Beaver Pond Press) and a regional publishing house (Tristin Publishing).

Having formally published books (e.g. with traditional publishers (Elsevier and CRC/Taylor Francis), ISBNs, and Library of Congress (LOC) registration), along with contributing to other projects, not to mention over a thousand articles, tips, reports, white papers, solution briefs, videos and other content, I often get asked what it takes to write a book, blog or other material.

Intel recommended reading: Cloud and Virtual Data Storage Networking (CRC Press)

I also get told by people that they could do a better job, to which I ask: then why don't they do something about it, vs. simply saying that they could do something better?

Back to the ArtReach St. Croix publishers forum event: the attendees were mainly aspiring authors looking to get their first works published. Having already been down the path that many in the room were looking to travel (getting published), it was interesting to hear the various questions and discussion topics. Some of those questions were about the process of self publishing vs. working with a publisher (large or small), in addition to how much it costs or how to get discovered. It was also great to hear the panelists discuss some of the hurdles authors face in getting their books published along with promoting their works.

One tip I learned from another author several years ago, before I did my first solo book, was the importance of promotion. That is, your publisher will help enable you; however, it is up to you, the author, to promote your works by creating a platform or means of interacting with different audiences. Consequently, it was fun to hear the panelists talk with the authors about the importance of creating a platform, including a blog, twitter, Google Plus, Facebook, articles and appearances, to help create awareness. What was interesting to watch were the authors who seemed more comfortable creating their works and then waiting for results to occur, as opposed to helping make their work a success.

Anyways, for those who are aspiring to write a book, blog or article, or even for those who are content being armchair authors or Monday morning quarterbacks, here is a link to a series about how to write a book or blog. The series (how to write a book or blog) can be read over at the VMware communities site that I'm contributing to as a vExpert.

Oh, and for you aspiring authors or bloggers wondering about creating and developing a platform, what you are reading here is an example of doing just that. In other words, my platform includes what you are reading here in addition to on my regular blog or other venues including Google Plus (G+), Facebook, LinkedIn and twitter among other venues.

So what are you waiting for? Go get your book or blog or article written and published, and start promoting it.

Ok, nuff said for now

Cheers Gs


IT and technology turkeys

Now that Halloween and talk of zombies has passed (at least for now), next up on the social or holiday calendar in the U.S. is Thanksgiving, which means turkey themes.

With turkey themes in mind, how about some past, current and maybe future technology flops or where are they now.

A technology turkey can be a product, trend, technique or theme that was touted (or hyped) and flopped for various reasons, not flying up to or meeting its expectations. That means a technology turkey may have had industry adoption, however it lacked customer deployment.

Let's try a few. How about holographic storage, or is that still a future technology?

Were the NeXT computer and the Apple Newton turkeys?

Disclosure: I have a Newton that has not been used since the mid 90s.

Is ATA over Ethernet (AoE) a future turkey candidate along with FCoE, aka Fibre Channel over Ethernet (or here or here), or is that just some people's wishful thinking regarding FCoE being a turkey?

Speaking of AoE, whatever happened to Zetera (aka Hammer storage), the iSCSI alternative of a few years ago?

To be fair, how about IPFC, not to be confused with FCIP (Fibre Channel frames mapped to IP for distance) or iFCP, which in turn should not be confused with FCoE or iSCSI. IPFC mapped IP as an upper level protocol (ULP) onto Fibre Channel, coexisting with FCP and FICON. There were only a few adopters of IPFC, who used it as a low latency channel to channel (CTC) mechanism for open systems before InfiniBand and other technologies matured.

I'm guessing that someone will step up to defend the honor of Microsoft Windows Vista; however, until then, IMHO it is or was a turkey. While on the topic of operating systems, anyone have an opinion on IBM's OS/2? Speaking of PCs, how about the DEC Rainbow and its sibling the Robin? Remember when IBM was in the PC business before selling it off to Lenovo; how about the IBM PCjr, turkey candidate or not?

HP should be on the turkey list with their now ex CEO Leo Apotheker, whom they put out to pasture. On the technology front, anybody remember AutoRAID?

How about the Britton Lee database machine, which today would be referred to as a storage appliance or application optimized storage system, such as the Oracle Exadata II (or the Oracle Exadata I based on HP hardware) among others. Note that I'm not saying Exadata I or Exadata II are turkeys, as that will be left to your own determination. Both are cool from a technology standpoint; however, there is more to success than having neat or interesting technology: moving from announcement to industry adoption to customer deployment, something Oracle has been having some success with.

Speaking of Oracle, remember when Sun bought the Encore storage system and renamed it the A7000 (not to be confused with the A5000, aka Photon) in an attempt to compete against the EMC Symmetrix? The Encore folks went on after Sun to their next project, which today is called DataCore. Meanwhile, Sun discontinued the A7000 after a period of time, similar to what they did with other acquisitions such as Pirus, which became the 6920, which was end of lifed as part of a deal where Sun increased their resale of HDS systems, an arrangement that too has since been archived. Hmmm, that begs the question of what happens with Oracle acquiring Pillar via an earn out scheme: if there is revenue, there is a payout; if there is no revenue, there is a tax write off.

What about big data? Will that become a turkey, following in the footsteps of other former high flyers such as cloud, virtualization, data classification, CDP, green IT and SOA among many others? IMHO that depends upon your view, definition and expectations of big data as a buzzword bingo topic. Your view will determine whether big data joins others that fade away from the limelight, shifting into productive modes for customers and profitable activity for vendors.

Want to read what others have to say about technology turkeys or flops?

Here is what IBTimes has to say about technology flops (aka turkeys), with Infoworld's lineup here and Computerworld's list here. Meanwhile, there are a couple from Mashable here and here, Cnet weighs in here, another list over at InvestorPlace is found here, and check out the list at Money here, with the Telegraph represented here. Of course you could Google to find more; however, you would probably also stumble upon Google's own flops or technology turkeys, including Wave.

What is your take as to other technology turkeys past, present or future?

Ok, nuff said for now

Cheers gs


The blame game: Does cloud storage result in data loss?

I recently came across a piece by Carl Brooks over at IT Tech News Daily that caught my eye, titled Cloud Storage Often Results in Data Loss. The piece has an effective title (good for search engine optimization, or SEO) as it stood out from many others I saw on that particular day.

Industry Trend: Cloud storage

What caught my eye in Carl's piece is that it reads as if the facts, based on a quick survey, point to clouds resulting in data loss, as opposed to being an opinion that some cloud usage can result in data loss.

Data loss

My opinion is that if not used properly, including ignoring best practices, any form of data storage medium or media could result in, or be blamed for, data loss. Some people have lost data as a result of using cloud storage services, just as other people have lost data or access to information on other storage mediums and solutions. For example, data has been lost on tape, Hard Disk Drives (HDDs), Solid State Devices (SSDs), Hybrid HDDs (HHDDs), RAID and non RAID, local and remote, and even optical based storage systems, large and small. In some cases there have been errors or problems with the medium or media; in other cases storage systems have lost access to, or lost, data due to hardware, firmware, software or configuration issues, including human error among other causes.


Technology failure: Not if, rather when and how to decrease impact
Any technology, regardless of what it is or who it is from, along with its architecture, design and implementation, can fail. It is not if, rather when; and how gracefully it fails, what safeguards decrease the impact, and how faults are contained or isolated differentiate various products or solutions. How they automatically repair and self heal to keep running, support accessibility and maintain data integrity is important, as is how those options are used. Granted, a failure may not be technology related per se; rather it may be something associated with human intervention, configuration, change management (or lack thereof), along with accidental or intentional activities.

Walking the talk
I have used public cloud storage services for several years, including SaaS and AaaS as well as IaaS (see more XaaS here) and, knock on wood, have not lost any data yet. Loss of access, sure; however, no data has been lost.

I follow my advice and best practices when selecting cloud providers looking for good value, service level agreements (SLAs) and service level objectives (SLOs) over low cost or for free services.

In the several years of using cloud based storage and services there has been some loss of access, however no loss of data. Those service disruptions, or losses of access to data and services, ranged from a few minutes to a little over an hour. In those scenarios, if I could not have waited for the cloud storage to become accessible, I could have accessed a local copy if one were available.

Had a major disruption occurred where it would have been several days before I could gain access to that information, or if it were actually lost, I have a data insurance policy. That data insurance policy is part of my business continuance (BC) and disaster recovery (DR) strategy. My BC and DR strategy is a multi layered approach combining local, offline and offsite, as well as online cloud data protection and archiving.

Assuming my cloud storage service could get data back to a given point in time (RPO) in a given amount of time (RTO), I have some options. One option is to wait for the service or information to become available again, assuming a local copy is no longer valid or available. Another option is to start restoration from a master gold copy and then roll forward changes from the cloud service as that information becomes available. In other words, I am using cloud storage as another resource that both protects what is local and complements how I locally protect things.

Minimize or cut data loss or loss of access
Anything important should be protected locally and remotely, meaning leveraging cloud as well as a master or gold backup copy.

To cut the cost of protecting information, I also leverage archives, which means not all data gets protected the same. Important data is protected more often, reducing RPO exposure and speeding up RTO during restoration. Other data that is not as important is still protected, however on a different frequency with other retention cycles; in other words, tiered data protection. By implementing tiered data protection, best practices, and various technologies including data footprint reduction (DFR) such as archive, compression and dedupe, in addition to local disk to disk (D2D), disk to disk to cloud (D2D2C), along with routine copies to offline media (removable HDDs, or RHDDs) that go offsite, I'm able to stretch my data protection budget further. Not only is my data protection budget stretched further, I have more options to speed up RTO, better granularity for recovery and enhanced RPOs.
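The tiered approach above can be sketched as a simple schedule model. The tier names and intervals below are illustrative assumptions of mine, not a prescription; the point is the trade-off between RPO exposure and the number of protection copies you pay for:

```python
def copies_per_month(interval_hours, month_hours=30 * 24):
    """How many protection copies a tier generates in a 30 day month."""
    return month_hours // interval_hours

# Illustrative tiers: backup interval in hours (assumed, not prescriptive)
tiers = {
    "critical (hourly D2D)": 1,
    "important (nightly D2D2C)": 24,
    "reference (weekly RHDD offsite)": 168,
}

for name, interval in tiers.items():
    # Worst-case RPO exposure with periodic copies is one full interval
    print(f"{name}: RPO exposure {interval}h, {copies_per_month(interval)} copies/month")
```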

If you are looking to avoid losing data, or loss of access, it is a simple equation in no particular order:

  • Strategy and design
  • Best practices and processes
  • Various technologies
  • Quality products
  • Robust service delivery
  • Configuration and implementation
  • SLO and SLA management metrics
  • People skill set and knowledge
  • Usage guidelines or terms of service (ToS)

Unfortunately, clouds, like other technologies or solutions, get a bad reputation or get blamed when something goes wrong. Sometimes it is the technology or service that fails; other times it is a combination of errors that results in loss of access or lost data. With clouds, as has been the case with other storage mediums and systems in the past, when something goes wrong and it has been hyped, chances are it will become a target for blame or finger pointing vs. determining what went wrong so that it does not occur again. For example, cloud storage has been hyped as easy to use: don't worry, just put your data there, you can get out of the business of managing storage as the cloud will do that magically for you behind the scenes.

The reality is that while cloud storage solutions can offload functions, someone is still responsible for making decisions about usage and configuration that impact availability. What separates various providers is their ability to design in best practices, isolate and contain faults quickly, and integrate resiliency as part of the solution, along with SLAs aligned to the service level you are expecting, all in an easy to use manner.

Does that mean the more you pay the more reliable and resilient a solution should be?
No, not necessarily, as there can still be risks including how the solution is used.

Does that mean low cost or for free solutions have the most risk?
No, not necessarily as it comes down to how you use or design around those options. In other words, while cloud storage services remove or mask complexity, it still comes down to how you are going to use a given service.

Shared responsibility for cloud (and non cloud) storage data protection
Anything important enough that you cannot afford to lose it, or need quick access to it, should be protected in different locations and on various mediums. In other words, balance your risk. Cloud storage service providers need to take responsibility for meeting service expectations for a given SLA and the SLOs that you agree to pay for (unless the service is free).

As the customer you have the responsibility of following best practices supplied by the service provider, which includes reading the terms of service (ToS). Part of that responsibility is understanding the ToS, SLA and SLOs for the level of service you are using, which means doing your homework to be a smart, educated buyer or consumer of cloud storage services.

If you are a vendor or value added reseller (VAR), your opportunity is to help customers with the acquisition process to make informed decisions. For VARs and solution providers, this can mean upselling customers to a higher level of service by making them aware of the risk and reward benefits as opposed to focusing on cost. After all, if an order taker at McDonalds can ask "Would you like to supersize your order?", why can't you as a vendor or solution provider also have a value oriented upsell message?

Additional related links to read more and sources of information:

Choosing the Right Local/Cloud Hybrid Backup for SMBs
E2E Awareness and insight for IT environments
Poll: What Do You Think of IT Clouds?
Convergence: People, Processes, Policies and Products
What do VARs and Clouds as well as MSPs have in common?
Industry adoption vs. industry deployment, is there a difference?
Cloud conversations: Loss of data access vs. data loss
Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?
Clouds are like Electricity: Dont be scared
Wit and wisdom for BC and DR
Criteria for choosing the right business continuity or disaster recovery consultant
Local and Cloud Hybrid Backup for SMBs
Is cloud disaster recovery appropriate for SMBs?
Laptop data protection: A major headache with many cures
Disaster recovery in the cloud explained
Backup in the cloud: Large enterprises wary, others climbing on board
Cloud and Virtual Data Storage Networking (CRC Press, 2011)
Enterprise Systems Backup and Recovery: A Corporate Insurance Policy

Poll:  Who is responsible for cloud storage data loss?

Taking action, what you should (or not) do
Don't be scared of clouds, however do your homework, be ready, look before you leap and follow best practices. Look into the service level agreements (SLAs) associated with a given cloud storage product or service, and follow best practices for how you or someone else will protect the data that is put into the cloud.

For critical data or information, consider having a copy of that data in the cloud as well as in another place, which could be a different cloud, local, or offsite and offline. Keep in mind that for critical information and data the theme is not if, rather when something will happen, so focus on what can be done to decrease the risk or impact, in other words, be ready.

Data put into the cloud can be lost, or loss of access to it can occur for some amount of time, just as happens with non cloud storage such as tape, disk or SSD. What impacts or minimizes your risk of using traditional local or remote as well as cloud storage are best practices along with how the storage is configured, protected, secured and managed. The type and quality of the storage product or cloud service can also have a big impact. Sure, a quality product or service can fail; however, you can design and configure to decrease those impacts.

Wrap up
Bottom line: do not be scared of cloud storage, however be ready, do your homework, review best practices, and understand the benefits and caveats, risk and reward. For those who want to learn more about cloud storage (public, private and hybrid) along with data protection, data management, and data footprint reduction among other related topics and best practices, I happen to know of some good resources. In addition to the links provided above, there is Cloud and Virtual Data Storage Networking (CRC Press), which you can learn more about here as well as find at Amazon among other venues. Also check out Enterprise Systems Backup and Recovery: A Corporate Insurance Policy by Preston De Guise (aka twitter @backupbear), which is a great resource for protecting data.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Trick or treat: 2011 IT Zombie technology poll

Warning: Do not be scared, however be ready for some trick or treat fun; it is, after all, the Halloween season.

I like new emerging technologies and trends along with Zombie technologies, you know, those technologies that have been declared dead yet are still being enhanced, sold and used.

Zombie technologies as a name may be new to some, while others will recognize the experience from the past: technologies being declared deceased yet still alive and in use. Zombie technologies are those that have been declared dead yet live on, enabling productivity for the customers that use them and often profits for the vendors who sell them.

Zombie technologies

Some people consider a technology or trend dead once it hits the peak of hype as that can signal a time to jump to the next bandwagon or shiny new technology (or toy).

Others will see a technology as being dead when it is on the down slope of the hype curve towards the trough of disillusionment citing that as enough cause for being deceased.

Yet others will declare something dead while it matures working its way through the trough of disillusionment evolving from market adoption to customer deployment eventually onto the plateau of productivity (or profitability).

Then there are those who see something as being dead once it finally is retired from productive use, or profitable for sale.

Of course, there are also those who just like to call anything new, or anything other than what they like or outside their comfort zone, dead. In other words, if your focus or area of interest is tied to new products, technology trends and their promotion, rest assured you had better be where the resources are being applied and view other things as dead, and thus you are probably not a fan of Zombie technologies (at least publicly).

On the other hand, if your area of focus is on leveraging technologies and products in a productive way, including selling things that are profitable without a lot of marketing effort, your view of what is dead or not will be different. For example, if you are risk averse, letting someone else be on the leading bleeding edge (unless you have a dual redundant HA blood bank attached to your environment), your view of what is dead will be much different from those promoting the newest trend.

Funny thing about being declared dead: often it is not the technology, implementation, research and development or customer acquisition that stops, rather simply the promotion, marketing and general awareness. Take tape, for example, which has been a multi-decade member of the Zombie technology list. Recently vendors banded together, investing or spending on marketing awareness and reaching out to say tape is alive. Guess what: lo and behold, there was a flurry of tape activity in venues that normally might not be talking about tape. Funny how marketing resources can bring something back from the dead, including Zombie technologies becoming popular or cool to discuss again.

With the 2011 Halloween season upon us, it is time to take a look at this year's list of Zombie technologies. Keep in mind that being named a Zombie technology is actually an honor, in that it usually means someone wants to see it dead so that his or her preferred product or technology can take its place.

Here are the 2011 Zombie technologies.

Backup: Far from being dead, its focus is changing and evolving with a broader emphasis on data protection. While many technologies associated with backup have been declared dead along with some backup software tools, the reality is that it is time to modernize how backups and data protection are performed. Thus, backup is on the Zombie technology list and will live on, like it or not, until it is exorcised from your environment and replaced with a modern, resilient and flexible protected data infrastructure.

Big Data: While not declared dead yet, it will be soon by some creative marketer trying to come up with something new. On the other hand, there are those who have done big data analytics across different Zombie platforms for decades which of course is a badge of honor. As for some of the other newer or shiny technologies, they will have to wait to join the big data Zombies.

Cloud: Granted, clouds are still on the hype cycle; some argue the hype has reached its peak and is now heading down into the trough of disillusionment, which of course some see as meaning dead. In my opinion cloud hype has peaked or is close to peaking, and real work is occurring, which means a gradual shift from industry adoption to customer deployment. Put a different way, clouds will be on the Zombie technology list for a couple of decades or more. Also keep in mind that being on the Zombie technology list is an honor, indicating a shift toward adoption and less emphasis on promotion or awareness fanfare.

Data centers: With the advent of the cloud, data centers or habitats for technology have been declared dead, yet there is continued activity in expanding or building new ones all the time. Even the cloud relies on data centers for housing the physical resources including servers, storage, networks and other components that make up a Green and Virtual Data Center or Cloud environment. Needless to say, data centers will stay on the Zombie list for some time.

Disk Drives: Hard disk drives (HDD) have been declared dead for many years and more recently, due to the popularity of SSDs, have lost their sex appeal. Ironically, if tape is dead at the hands of HDDs, then how can HDDs be dead, unless of course they are on the Zombie technology list? What is happening is that, like tape, the role of HDDs is changing as the technology continues to evolve, and they will be around for another decade or so.

Fibre Channel (FC): This is a perennial favorite, having been declared dead on a consistent basis going back to the early 90s. While there are challengers, as there have been in the past, FC is far from dead as a technology, with 16Gb (16GFC) now rolling out and a transition path to Fibre Channel over Ethernet (FCoE). My take is that FC will be on the Zombie list for several more years until finally retired.

Fibre Channel over Ethernet (FCoE): This is a new entrant and one uniquely qualified for being declared dead as it is still in its infancy. Like its peer FC which was also declared dead a couple of decades ago, FCoE is just getting started and looks to be on the Zombie list for a couple of decades into the future.

Green IT: I have heard that Green IT is dead; after all, it was hyped before the cloud era, which has itself been declared dead by some. Yet there remains a Green gap or disconnect between messaging and actual issues, and thus missed opportunities. For a dead trend, SNIA recently released its Emerald program, which consists of various metrics and measurements (remember, zombies like metrics to munch on) for gauging energy effectiveness for data storage. The hype cycle of Green IT and Green storage may be dead; however, Green IT in the context of a shift in focus to increased productivity using the same or less energy is underway. Thus Green IT and Green storage are on the Zombie list.

iPhone: With the advent of Droid and other smart phones, I have heard iPhones declared dead, and granted, some older versions are. However, while Apple cofounder Steve Jobs has passed on (RIP), I suspect we will be seeing and hearing more about the iPhone for a few years more if not longer.

IBM Mainframe: When it comes to information technology (IT), the king of the Zombie list is the venerable IBM mainframe aka zSeries. The IBM mainframe has been declared dead for over 30 years if not longer and will be on the zombie list for another decade or so. After all, IBM keeps investing in the technology as people buy them not to mention IBM built a new factory to assemble them in.

NAS: Congratulations to Network Attached Storage (NAS) including Network File System (NFS) and Windows Common Internet File System (CIFS) aka Samba or SMB for making the Zombie technology list. This means of course that NAS in general is no longer considered an upstart or immature technology; rather it is being used and enhanced in many different directions.

PC: The personal computer was touted as killing off some of its fellow Zombie technology list members, including the IBM mainframe. With the advent of tablets, smart phones and virtual desktop infrastructures (VDI), the PC has been declared dead. My take is that while the IBM mainframe may eventually drop off the Zombie list in another decade or two if it finds something to do in retirement, the PC will be on the list for many years to come. Granted, the PC could live on even longer in the form of a virtual server, where the majority of guest virtual machines (VMs) are in support of Windows based PC systems.

Printers: How long have we heard that printers are dead? The day that printers are dead is the day that the HP board of directors should really consider selling off that division.

RAID: It's been over twenty years since the first RAID white paper and early products appeared. Back in the 90s RAID was a popular buzzword and bandwagon topic; however, people have moved on to new things. RAID has been on the Zombie technology list for several years now, while it continues to be deployed from the high end of the market down into consumer products. The technology continues to evolve in both hardware and software implementations on a local and distributed basis. Look for RAID to be on the Zombie list for at least the next couple of decades while it continues to evolve; after all, there is still room for RAID 7, RAID 8 and RAID 9, not to mention moving into hexadecimal or double digit variants.

SAN: Storage Area Networks (SANs) have been declared dead and thus on the Zombie technology list before, and will be mentioned again well into the next decade. While the various technologies will continue to evolve, networking your servers to storage will also expand into different directions.

Tape: Magnetic tape has been on the Zombie technology list almost as long as the IBM mainframe, and it is hard to predict which one will last longer. My opinion is that tape will outlast the IBM mainframe, as it will be needed to retrieve the instructions on how to deinstall those Zombie monsters. Tape has seen a resurgence in vendors spending marketing resources, and to no surprise, there has been an increase in coverage about it being alive, even at Google. Rest assured, tape is very safe on the Zombie technology list for another decade or more.

Windows: Similar to the PC, Microsoft Windows has been touted in the past as causing other platforms to be dead; however, it has itself been on the Zombie list for many years now. Given that Windows is the most commonly virtualized platform or guest VM, I think we will be hearing about Windows on the Zombie list for a few decades more. There are particular versions of Windows, as with any technology, that have gone into maintenance or sustainment mode or even been discontinued.

Poll: What are the most popular Zombie technologies?

Keep in mind that a Zombie technology is one that is still in use, being developed or enhanced, sold usually at a profit, and used typically in a productive way. In some cases, a declared dead or Zombie technology may only be in its infancy, having just climbed over the peak of hype or come out of the trough of disillusionment. In other instances, the Zombie technology has been around for a long time yet continues to be used (or abused).

Note: Zombie voting rules apply which means vote early, vote often, and of course vote for those who cannot include those that are dead (real or virtual).

Ok, nuff said, enough fun, let's get back to work, at least for now

Cheers gs


Trick or treat: Have you seen any IT Frankenstacks

Given that it is Halloween season, time for some fun.

Over the past couple of weeks various product and solution services announcements have been made that result in various articles, columns, blogs and commentary in support of them.

Ever wonder which, if any, of those products could actually be stitched together to work in a production environment without increasing the overall cost and complexity they sometimes promote as their individual value proposition? Granted, many can and do work quite well when introduced into heterogeneous or existing environments with good interoperability. However, what about those that look good on paper, or in a WebEx or YouTube video on their own, yet may be challenged to be pieced together to work with others?

Reading product announcements

Hence, in the spirit of Halloween, the vision of a Frankenstack appeared.

A Frankenstack is a fictional environment where you piece various technologies from announcements or what you see or hear about in different venues into a solution.

Part of being a Frankenstack is that while the various pieces may look interesting on their own, good luck trying to put them together on paper, let alone in a real environment.

While I have not yet attempted to piece together any Frankenstacks lately, I can visualize various ones.

Stacking or combining different technologies, will they work together?

A Frankenstack could be based on what a vendor, VAR, or solution provider proposes or talks about.

A Frankenstack could also be what an analyst, blogger, consultant, editor, pundit or writer pieces together in a story or recommendation.

Some Frankenstacks may be more synergistic and interoperable than others perhaps even working in a real customer environment.

Of course even if the pieces could be deployed, would you be able to afford them let alone support them (interoperability aside) without adding complexity?

You see, a Frankenstack might look good on paper, in a slide deck, a WebEx or some other venue; however, will it actually work or apply to your environment, or is it just fun to talk about?

Don't get me wrong, I like hearing about new technology and products as much as anyone else; however, let's have some fun with Frankenstacks while keeping in perspective whether they help or add complexity to your environment.

Ok, enough fun for now, let me know what you see or can put together in terms of Frankenstacks.

Keep in mind they don't actually have to work, as that is what qualifies them for trick or treat and Frankenstack status.

Enjoy your Halloween season; do not be afraid, however be ready for some tricks and treats, it's that time of the year.

Cheers gs


Practical Email optimization and archiving strategies

Email is a popular tool for messaging, calendaring, and managing contacts along with attachments in most organizations.

Email and messaging

Given the popularity of email and the diverse ways it is used for managing various forms of unstructured data attachments, including photos, video, audio, spreadsheets, presentations and other document objects, there are corresponding back end challenges. Those back end challenges include managing the data storage repositories (e.g. file systems and storage systems) that are used for preserving and serving email documents, along with enabling regulatory or compliance mandates.

Email archiving is an important enabler for regulatory compliance and e-discovery functions. However, there is another important use for email archiving: as a data footprint reduction (DFR) technique and technology, it enables storage optimization, being green, and supporting growth while stretching budgets further. There is, after all, no such thing as a data or information recession, and all one has to do to verify the trend is look at your own email activity.

Industry Trend: Data growth and demand

There are, however, constraints on time and budgets along with demands to do more while relying on more information. Email has become a central tool for messaging, including social media networking, handling attachments, and managing all of that data.

DFR enables more data to be stored, retained, managed and maintained in a cost effective manner. This includes storing more data per person where the additional data being retained adds value to an organization. Also included is keeping more data readily accessible, not necessarily instantly accessible, however within minutes instead of hours or days depending on service requirements.
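As a back-of-the-envelope sketch of the DFR idea, the reduction ratios from different techniques multiply. The ratios below are hypothetical illustrations, not measurements from any particular product or environment:

```python
# Hypothetical DFR math: combined reduction ratios multiply, so modest
# techniques applied together can stretch capacity a long way.

def effective_capacity(raw_tb, dedupe_ratio, compression_ratio):
    """Logical TB that fit on raw_tb of physical storage."""
    return raw_tb * dedupe_ratio * compression_ratio

# Example: 100 TB raw with an assumed 4:1 dedupe (repetitive backup
# data) and 2:1 compression yields 800 TB of logical capacity.
print(effective_capacity(100, 4, 2))  # 800
```

Archiving compounds this further by moving cold email off primary storage entirely, so the remaining active data is what the faster (and more expensive) tier has to hold.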

Data footprint reduction (DFR) techniques and technologies

Here is a link to a recent article that I did presenting five tips and strategies for optimizing e-mail using archiving.

Hopefully many of you will find these to be common sense tips already being implemented; however, if not, now is the time to take action to stretch your resources further and do more.

In general email optimization tips include:

  • Set policies for retention and disposal
  • Establish filters and rules
  • Index and organize your inbox
  • Archive messages regularly
  • Perform routine cleanup and optimization
  • Leverage cloud data protection services and solutions
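The first tip, setting retention and disposal policies, boils down to a simple age-based classification. Here is an illustrative sketch; the 90-day and 7-year thresholds are hypothetical and would come from your own compliance and business requirements, not from any standard:

```python
from datetime import date, timedelta

# Hypothetical thresholds; real values come from your retention policy.
ARCHIVE_AFTER = timedelta(days=90)        # move out of the active inbox
DISPOSE_AFTER = timedelta(days=7 * 365)   # eligible for disposal

def disposition(received, today):
    """Classify a message by age: keep in place, archive, or dispose."""
    age = today - received
    if age >= DISPOSE_AFTER:
        return "dispose"
    if age >= ARCHIVE_AFTER:
        return "archive"
    return "keep"

today = date(2011, 10, 31)
print(disposition(date(2011, 10, 1), today))  # keep
print(disposition(date(2011, 1, 1), today))   # archive
print(disposition(date(2003, 1, 1), today))   # dispose
```

Once a rule like this exists, the filtering, archiving and cleanup tips above become mechanical: a scheduled job applies the policy instead of relying on each user's inbox habits.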

When it comes to archiving projects, walk before you run: establish success to build upon for broader deployment of email archiving by finding and addressing low hanging fruit opportunities.

Instead of trying to do too much, find opportunities that can be addressed and leveraged as examples to build business cases to move forward.

By having some success stories and proof points, you can help convince management to support additional steps, not to mention getting them to back your policies to achieve success.

An effective way to convince management these days is to show them how, by taking additional email archiving steps, you can support increased growth demand and reduce costs while enhancing productivity, not to mention adding compliance and e-discovery capabilities as side benefits.

You can read more here and here, ok, nuff said for now.

Cheers gs


Check out these top 50 IT blogs

The other day I saw something come in via the net about a top 50 IT blog list from Biztech Magazine, so being curious I clicked on the link (after making sure that it was safe).

To my surprise, I saw my blog (aka Greg's StorageIOblog) listed near the top (they sorted in blog name order) of the top 50 IT blog sites they listed.

Must-Read IT Blog

I'm honored to have been included in such an esteemed and diverse list of blogs spanning various technologies, topics and IT focus areas.

Congratulations to all that made the list, as well as the other blogs that you will want to add to your reading lists, including those mentioned over on Calvin Zito's (aka @hpstorageguy) blog.

Check out the top 50 IT blog list here.

Ok, nuff said for now.

Cheers gs


Cloud and Virtual Data Storage Networking book VMworld 2011 debut

Following up on a previous preview post about my new book Cloud and Virtual Data Storage Networking (CRC Press): for those attending VMworld 2011 in Las Vegas, Monday August 29 through Thursday September 1st 2011, you can pick up your copy at the VMworld book store.

Cloud and Virtual Data Storage Networking Book

Book signing at VMworld 2011

On Tuesday August 30 at 1PM local time, I will be at the VMworld store signing books. Stop by the book store, say hello, and pick up your copy of Cloud and Virtual Data Storage Networking (CRC Press). Also check out the other new releases by fellow vExpert authors during the event. I have also heard rumors that some exhibitors among others will be doing drawings, so keep an eye out in the expo hall and visit those showing copies of my new book.

The VMworld book store hours are:

Monday 8:30am to 7:30pm
Tuesday 8:30am to 6:00pm
Wednesday 8:30am to 8:00pm
Thursday 8:00am to 2:00pm

For those not attending VMworld 2011, you can order your copy from different venues including Amazon.com, Barnes and Noble, DigitalGuru and CRC Press among others.

Learn more about Cloud and Virtual Data Storage Networking (CRC Press) at https://storageioblog.com/book3

Look forward to seeing you at the various VMworld events in Las Vegas as well as at other upcoming venues.

Ok, nuff said for now.

Cheers gs


2011 Summer momentus hybrid hard disk drive (HHDD) moment

This is the fourth in a series of posts (others are here, here and here) that I have been doing for over a year now, taking a moment now and then to share some of my experiences using hybrid hard disk drives (HHDD) alongside my hard disk drives (HDD) and solid state drives (SSD).

It has been several months now since applying the latest firmware (SD25), which resulted in even better stability that was further enhanced when upgrading a few months ago to Windows 7 on all systems with the Seagate Momentus XT HHDD installed. One additional older system was recently upgraded from a slower, lower capacity 3.5 inch form factor SATA HDD to a physically smaller 2.5 inch HHDD. The net result is that the system now boots in a fraction of the time, shuts down faster, work on it is much more productive, and capacity was increased by three and a half times.

Why use an HHDD when you could get an SSD?

With flash SSD devices continuing to become more affordable for a given price capacity point, why did I not simply install some of those devices instead of using the HHDDs?

With the money saved from buying the 500GB Momentus XT on Amazon.com (under $100 USD) vs. buying a smaller capacity SSD, I was also able to double the amount of DRAM in that system furthering its useful life plus buying some time to decide what to replace it with while having extra funds for other projects.

Sure, I would like to have more and larger capacity SSDs to go along with those I already have; however, there is balancing budget with needs and improving productivity (needs vs. wants).

To expand more on why the HHDD at this time vs. SSD: I want more SSD devices to coexist with those I already have and use for different functions. Looking to stretch my budget further, the HHDDs are a great balance, being almost, and in some cases just as, fast as SSDs while at the cost of a high capacity HDD. In other words, I'm getting the best of both worlds: a 7,200 RPM 2.5 inch 500GB HDD (e.g. for space capacity) that has 4GB of single level cell (SLC) flash (e.g. SSD) and 32MB of DRAM as buffers to help speed up read and write operations.

Given what I'm using them for, I do not need the consistently higher performance of an SSD across all of my data, which brings up the other benefit: I'm able to retain more data on the device as a buffer or cache instead of having to go to a NAS or other storage repository to get it. Even though the amount of data being stored on the HHDD is increasing, not all of it gets backed up locally or to my cloud provider, as there is already a copy (or copies) elsewhere. Instead, a small subset of data that is changing or very important gets routinely protected locally and remotely to the cloud, enabling easier and faster restores when needed. Now, if you have a large budget or someone is willing to buy or give you one, sure, go ahead and get one of the high capacity SSDs (preferably SLC based if concerned about endurance); however, there are some good MLC ones out there as well.

Step back a bit, what is an HHDD?

Hybrid hard disk drives (HHDDs) such as the Seagate Momentus XT are, as their name implies, a combination of large- to medium-capacity HDDs with flash SSDs. The result is a mix of performance and capacity in a cost effective footprint. HHDDs have not seen much penetration in the enterprise space and may not see much more, given how many vendors are investing in the firmware and associated software technology to achieve hybrid results using a mix of SSDs and high capacity disk drives, along with the lack of awareness that HHDDs exist.

Where HHDDs could have some additional traction is in secondary or near-line solutions that need some performance enhancements while having a large amount of capacity in a cost-effective footprint. For now, HHDDs are appearing mainly in desktops, laptops, and workstations that need lots of capacity with some performance but without the high price of SSDs. Before I installed the HHDDs in my laptops, I initially used one as a backup and data movement device, and I found that large, gigabyte-sized files could be transferred as fast as with SSDs and much faster than via my WiFi based network and NAS. The easiest way to characterize where HHDDs fit is where you want an SSD for performance, but your applications do not always need speed and you need a large amount of storage capacity at an affordable price.
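The caching behavior behind that fit can be illustrated with a toy model. To be clear, this is a simplified LRU sketch for illustration only, not Seagate's actual Adaptive Memory firmware; the block count and workload are made up:

```python
from collections import OrderedDict

class HybridDrive:
    """Toy model: a small flash cache in front of a large, slow disk."""
    def __init__(self, flash_blocks=4):
        self.flash = OrderedDict()   # block id -> data, in LRU order
        self.flash_blocks = flash_blocks
        self.flash_hits = 0          # fast reads served from flash
        self.disk_reads = 0          # slow reads that hit the platters

    def read(self, block):
        if block in self.flash:
            self.flash.move_to_end(block)       # refresh LRU position
            self.flash_hits += 1
        else:
            self.disk_reads += 1                # fetch from spinning disk
            self.flash[block] = "data-%d" % block
            if len(self.flash) > self.flash_blocks:
                self.flash.popitem(last=False)  # evict least recently used
        return self.flash[block]

d = HybridDrive()
for block in [1, 2, 3, 1, 1, 2]:   # boot-like workload: hot blocks repeat
    d.read(block)
print(d.flash_hits, d.disk_reads)  # 3 3
```

The point of the sketch: on a workload where the same blocks are read repeatedly (booting, launching applications), half the reads never touch the platters, which is why an HHDD can feel close to SSD speed while costing like an HDD.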

SSDs are part of the future; however, HDDs have a lot of life in them, including increased capacities. Both are best used where their strengths can be maximized, thus HHDDs are a great complement or stepping stone for some applications. Note: Seagate recently announced that they have shipped over one million HHDDs in just over a year's time.

I do find it interesting, though, when I hear from those who claim the HDD is dead and SSD is the future, yet they do not have SSDs in their own systems, let alone have or talk about HHDDs. Hmmm.

Ok, nuff said for now.

Cheers gs


Supporting IT growth demand during economically uncertain times

Doing more with less, doing more with what you have, or reducing cost has been the mantra for the past several years now.

Does that mean that, as a trend, these are being adopted as the new way of doing business, or are they simply a cycle or temporary situation?

The reality is that many if not most IT organizations are and will remain under pressure to stretch their budgets further for the immediate future. Over the past year or two, some organizations saw budget increases along with increased demand, while others saw budgets held flat or reduced while still having to support growth. Meanwhile, there is no such thing as an information recession: more data is being generated, moved, processed, stored and retained for longer periods of time.

Industry trend: No such thing as a data recession

Something has to give, as shown in the following figure: one curve shows continued demand and growth, another shows the need to reduce costs, while a third reflects the importance of maintaining or enhancing service level objectives (SLOs) and quality of service (QoS).

Enable growth while removing complexity and cost without compromising service levels

One way to reduce costs is to inhibit growth; another is to support growth by sacrificing QoS, including performance, response time or availability, as a result of over-consolidation, excessive utilization or instability from stretching resources too far. Where innovation comes into play is in finding and fixing problems vs. moving or masking them, or treating symptoms vs. the real issue and challenge.

Innovation also comes into play by identifying both near-term tactical and longer-term strategic means of taking complexity and cost out of service delivery and the resources needed to support it. For example, determine the different resources and processes involved in delivering an email box of a given size and reliability. Another example is supporting a virtual machine (VM) with a given performance and capacity capability. Yet another scenario is a file share or home directory of a specific size and availability. By streamlining workflows, leveraging automation and other tools to enforce policies, as well as adopting new best practices, complexity and thereby costs can be reduced. The net result is a lower cost to provide a given service to a specific level, which, when multiplied out over many users or instances, results in cost savings as well as productivity gains.

The above is all good and well for longer term strategic and where you want to go or get to, however what can be done right now today?

Here are a few tips for doing more with what you have while supporting growth demands:

If you have service level agreements (SLAs) and SLOs as part of your service catalog, review with your users what they need vs. what they would like to have. What you may find is that your users expect a given level of service, yet would be happy moving to a cloud service with lower SLO and SLA expectations if it costs less. That scenario would indicate you are giving users a higher level of service than their requirements actually call for. On the other hand, if you do not have SLOs and SLAs aligned with the cost of services, set them up and review customer or client expectations, needs vs. wants, on a regular basis. You might find that you can stretch your budget by delivering a lower (or higher) class of service to meet different users' requirements than what was assumed. In the case of supporting a better class of service, if an SSD-enabled solution can reduce latency or wait times and boost productivity (more transactions, page views or revenue per hour), that could prompt a client to request that capability to meet their business needs.

Reduce your data footprint impact in order to support growth using the ABCDs of data footprint reduction (DFR): Archive (email, file, database), Backup modernization, Compression and consolidation, Data management and dedupe, along with storage tiering among other techniques.
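To put a rough number on the C and D in those ABCDs, here is a hedged sketch (the sample data and 4 KB block size are made-up illustrations, with zlib standing in for whatever compression your storage actually uses) that estimates fixed-block dedupe and compression ratios for a buffer of data:

```python
import hashlib
import zlib

def dedupe_ratio(data: bytes, block_size: int = 4096) -> float:
    """Estimate a fixed-block dedupe ratio: total blocks / unique blocks."""
    seen = set()
    total = 0
    for i in range(0, len(data), block_size):
        seen.add(hashlib.sha256(data[i:i + block_size]).digest())
        total += 1
    return total / len(seen) if seen else 1.0

def compression_ratio(data: bytes) -> float:
    """Estimate a compression ratio, using zlib as a stand-in."""
    return len(data) / len(zlib.compress(data)) if data else 1.0

# Hypothetical sample: the same 4 KB block repeated 100 times
sample = b"the same 4 KB block repeated".ljust(4096, b".") * 100
print(f"dedupe   ~{dedupe_ratio(sample):.0f}:1")   # → dedupe   ~100:1
print(f"compress ~{compression_ratio(sample):.1f}:1")
```

Real-world ratios vary wildly by data type, which is exactly why measuring your own data beats quoting vendor averages.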

Storage and server virtualization and optimization, using capacity consolidation where practical and IO consolidation to fast storage and SSD where possible. Also review storage configuration, including RAID and allocation, to identify whether any relatively easy changes can improve performance, availability, capacity and energy impact.

Investigate available upgrades and enhancements to your existing hardware, software and services that can be applied to provide breathing room within current budgets while evaluating new technologies.

Find and fix problems vs. chasing false positives that provide near-term relief only to have the real issue reappear. Maximize your budgets by identifying where people's time and other resources are being spent due to processes, workflows, technology configuration complexity or bottlenecks, and address those.

Enhance and leverage existing management measurements to gain more insight, along with implementing new metrics for end-to-end (E2E) situational awareness of your environment, which will enable effective decision making. For example, you may be told to move some function to the cloud because it will be cheaper, yet if you do not have metrics to indicate one way or the other, how can that be an informed decision? If you have metrics that show your cost for the same service being moved to a cloud or managed service provider, as well as QoS, SLO, SLA, RTO, RPO and other TLAs, then you can make informed decisions. That decision may still be to move functions to a cloud or other service even if it is in fact more expensive than what you can provide it for, so that your resources can be directed to supporting other important internal functions.
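As a simple illustration of that kind of metric-driven comparison, the sketch below computes cost per mailbox for a hypothetical internal email service vs. a hypothetical cloud offering (every dollar figure here is made up purely for illustration):

```python
def cost_per_unit(monthly_costs: dict, units: int) -> float:
    """Total monthly service cost divided by units delivered (mailboxes, VMs, shares)."""
    return sum(monthly_costs.values()) / units

# Hypothetical internal cost breakdown for 1,000 mailboxes
internal = {"hardware": 4000, "software": 1500, "power_cooling": 600, "admin_time": 2900}
# Hypothetical cloud price of $11.50 per mailbox per month
cloud = {"subscription": 11.50 * 1000}

print(f"internal: ${cost_per_unit(internal, 1000):.2f}/mailbox")  # → internal: $9.00/mailbox
print(f"cloud:    ${cost_per_unit(cloud, 1000):.2f}/mailbox")     # → cloud:    $11.50/mailbox
```

Even a toy model like this makes the point: with the numbers in hand, choosing the more expensive option can still be an informed decision rather than a guess.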

Look for ways to reduce the cost of a service delivered, as opposed to simply cutting costs. They sound like one and the same; however, if you have metrics and measurements providing situational awareness of what a service costs, you can also look at how to streamline those services, remove complexity, reduce workflow and leverage automation, thereby removing cost. The goal is the same, however how you go about removing cost can have an impact on your return on innovation, not to mention customer satisfaction.

Also be an informed shopper: have a forecast or plan for what you will need and when, along with what you must have (core requirements) vs. what you would like to have or want. When looking at options, balance what is needed, and then see whether you can get what you want for little or no extra cost if it adds value or enables other initiatives. Part of being an informed shopper is having the support of the business to procure what you want or need, which means aligning technology resources and their cost to the delivery of business functions and services.

What you need vs. what you want
In a recent interview with the Associated Press (AP), the reporter wanted my comments about spending vs. saving during economically tough times (you can read the story here). Basically, my comments were to spend within your means by identifying what you need vs. what you want, and what is required to keep the business running or improve productivity and remove cost, as opposed to acquiring nice-to-have things that can wait. Sure, I would like a new 85 to 120" 3D monitor for my workstation that could double as a TV; however, I do not need or require it.

On the other hand, I recently upgraded an existing workstation, adding a Hybrid Hard Disk Drive (HHDD) and some additional memory, about a $200 USD investment that is already paying for itself via increased productivity. That is, instead of enjoying a cup of Dunkin' Donuts coffee while waiting for tasks to complete on that system, I'm able to get more done in a given amount of time, boosting productivity.

For IT environments, this means looking at expenditures to determine what is needed or required to keep things running while supporting near-term tactical and strategic initiatives, vs. pet projects.

For vendors and VARs, if things have not been a challenge yet, they will now need to refine their messages to show more value and return on innovation (ROI) in terms of how to help their customers or prospects stretch resources (budgets, people, skill sets, products, services, licenses, power and cooling, floor space) further to support growth, while removing costs without compromising service delivery. This also means a shift in thinking from short-term or tactical cost cutting to longer-term strategic approaches of reducing the cost to deliver a service or resource.

Related links pertaining to stretching your resources, doing more with what you have, increasing productivity and maximizing your budget to support growth without compromising on customer service:

Saving Money with Green IT: Time To Invest In Information Factories
Storage Efficiency and Optimization – The Other Green
Shifting from energy avoidance to energy efficiency
Saving Money with Green Data Storage Technology
Green IT Confusion Continues, Opportunities Missed!
PUE, Are you Managing Power, Energy or Productivity?
Cloud and Virtual Data Storage Networking
Is There a Data and I/O Activity Recession?
More Data Footprint Reduction (DFR) Material

What is your take?

Are you and your company going into spending-freeze mode, or are you still spending, albeit with constraints placed on discretionary spending?

How are you stretching your IT budget to go further?

 

Ok, nuff said for now.

Cheers gs


Measuring Windows performance impact for VDI planning

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to measuring the impact of Windows boot performance and what that means for planning Virtual Desktop Infrastructure (VDI) initiatives.

With Virtual Desktop Infrastructure (VDI) adoption being a popular theme associated with cloud and dynamic infrastructure environments, a related discussion point is the impact on networks, servers and storage during boot or startup activity, and how to avoid bottlenecks. VDI solution vendors include Citrix, Microsoft and VMware, along with various server, storage, networking and management tools vendors.

A common storage and network related topic involving VDI is boot storms, when many workstations or desktops all start up at the same time. However, any discussion of VDI and its impact on networks, servers and storage should also be expanded from read-centric boots to write-intensive shutdown or maintenance activity as well.
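To put a rough number on a boot storm, here is a back-of-the-envelope sketch (the desktop count, I/Os per boot and boot window are all hypothetical figures, not measurements) estimating the aggregate IOPS a VDI storage system would need to absorb:

```python
def boot_storm_load(desktops: int, io_per_boot: int, boot_window_s: int, read_pct: float = 0.8):
    """Rough aggregate IOPS if `desktops` all boot within `boot_window_s` seconds."""
    total_io = desktops * io_per_boot
    iops = total_io / boot_window_s
    return {
        "total_iops": iops,
        "read_iops": iops * read_pct,           # boots are read-heavy
        "write_iops": iops * (1 - read_pct),    # shutdowns flip this mix toward writes
    }

# Hypothetical: 500 desktops, ~30,000 I/Os per boot, all booting in a 10-minute window
load = boot_storm_load(500, 30_000, 600)
print(f"total {load['total_iops']:.0f} IOPS "
      f"({load['read_iops']:.0f} read / {load['write_iops']:.0f} write)")
# → total 25000 IOPS (20000 read / 5000 write)
```

Swapping the read percentage toward writes approximates the shutdown or maintenance scenario mentioned above, which is often the harder one for storage to absorb.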

Having an understanding of your performance requirements is important for adequately designing a configuration that will meet your Quality of Service (QoS) and service level objectives (SLOs) for a VDI deployment, in addition to knowing what to look for in candidate server, storage and networking technologies. For example, knowing how your different desktop applications and workloads perform on a normal basis provides a baseline to compare against during busy periods or times of trouble. Another benefit is that when shopping for storage systems, for example, and reviewing various benchmarks, knowing your actual performance and application characteristics helps align the applicable technology to your QoS and SLO needs while avoiding apples-to-oranges benchmark comparisons.
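As one way to build such a baseline, the sketch below (using made-up response-time samples rather than real measurements) summarizes normal-period I/O latency into average, 95th percentile and max values that can then be compared against busy periods:

```python
import statistics

def latency_baseline(samples_ms):
    """Summarize I/O response-time samples (ms) into a simple baseline."""
    ordered = sorted(samples_ms)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank 95th percentile
    return {"avg_ms": statistics.fmean(samples_ms), "p95_ms": p95, "max_ms": ordered[-1]}

# Hypothetical samples collected during a normal workday
normal = [4, 5, 5, 6, 7, 5, 4, 6, 8, 5]
baseline = latency_baseline(normal)
busy_avg = 14.2  # say, the average measured during a boot storm
print(f"normal avg {baseline['avg_ms']:.1f} ms, p95 {baseline['p95_ms']} ms; "
      f"busy avg {busy_avg} ms")
# → normal avg 5.5 ms, p95 7 ms; busy avg 14.2 ms
```

In practice you would feed this from a tool such as hIOmon or OS performance counters; the point is simply that without the normal-period numbers, the busy-period numbers have no context.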

Check out the entire piece including some test results using the hIOmon tool from hyperIO to gather actual workstation performance numbers.

Keep in mind that the best benchmark is your actual applications running as close as possible to their typical workload and usage scenarios.

Also keep in mind that fast workstations need fast networks, fast servers and fast storage.

Ok, nuff said for now.

Cheers gs


Full RSS archive feeds are now available for StorageIOblog

To speed up access to the StorageIO and StorageIOblog site RSS full and RSS summary feeds, older posts have been moved to a new archive RSS feed. These changes apply only to the RSS full and summary feed files; no changes have been made to the StorageIOblog site itself.

View or access the full StorageIO RSS feed (http://storageioblog.com/RSSfullArchive.xml) here.

Enjoy the faster access to the RSS full and summary feeds, plus the archived feeds. Ok, nuff said for now.

Cheers gs
