Putting some VMware ESX storage tips together: (Part II)

In the first part of this post I showed how to use a tip from Duncan Epping to trick VMware into thinking that a HHDD (Hybrid Hard Disk Drive) was a SSD.

Now let's look at using a tip from Dave Warburton to turn an internal SATA HDD into an RDM for one of my Windows-based VMs.

My challenge was that I wanted to give one of my VMs a Raw Device Mapping (RDM) of an HDD, except the device was an internal SATA drive rather than something attached to an FC or iSCSI SAN (such as my Iomega IX4 that I bought from Amazon.com). Given the standard tools and some of the material available, it would have been easy to give up and quit.

Image of internal RDM with VMware
Image of internal SATA drive being added as a RDM with the vSphere Client

Thanks to Dave's great post that I found, I was able to create a RDM of an internal SATA drive and present it to an existing VM running Windows 7 Ultimate, which is now happy, as am I.

Pay close attention to make sure that you get the correct device name for the steps in Dave’s post (link is here).

From the ESX command line I found the device name for the drive I wanted to use:

t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5
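If you are not sure of the exact device name on your system, you can list the raw devices the host sees from the shell. A minimal sketch, assuming ESXi 5.x (the grep pattern is just a convenience, and your device names will differ):

ls /vmfs/devices/disks/ | grep -i ata

esxcli storage core device list

The first command shows the device names (internal SATA drives typically show up with a t10.ATA prefix), while the second shows details such as size and model to help confirm you are looking at the right drive.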

Then I used the following ESX shell command per Dave’s tip to create an RDM of an internal SATA HDD:

vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5 /vmfs/volumes/dat1/rdm_ST1500L.vmdk

Note the command takes the full device path (no space after disks/) followed by the path where the RDM mapping file (vmdk) will be created.
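As a sanity check before touching any VM, you can query the new mapping file to confirm it points at the raw device. This is my own verification habit rather than a step from Dave's post:

vmkfstools -q /vmfs/volumes/dat1/rdm_ST1500L.vmdk

If all went well, this reports the vmdk as a raw device mapping and shows the backing device it maps to.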

Then the next step was to update an existing VM using the vSphere Client so it uses the newly created RDM.

Hint: pay very close attention to your device naming, along with what you name the RDM and where you put it. I also recommend trying or practicing on a spare or scratch device first, in case something gets messed up. I practiced on a HDD used for moving files around: after doing the steps in Dave's post, I added the RDM to an existing VM, started the VM and accessed the HDD to verify all was fine (it was). After shutting down the VM, I removed the RDM from it as well as from ESX, and then created the real RDM.

As per Dave's tip, the vSphere Client did not recognize the RDM per se; however, telling it to look at existing virtual disks and browsing the data stores, lo and behold, the RDM I was looking for was there. The following shows an example of using the vSphere Client to add the new RDM to one of my existing VMs.

In case you are wondering why I wanted to make a non-SAN HDD an RDM vs. doing something else: simple, the HDD in question is a 1.5TB drive holding backups that I want to use as is. The HDD is also BitLocker protected, and I want the flexibility to remove the device and access it from a non-VM based Windows system if I have to.


Image of my VMware server with internal RDM and other items

Could I have accomplished the same thing using a USB attached device accessible to the VM?

Yes, and in fact that is how I do periodic updates to removable media (HDDs using Seagate GoFlex drives) where I am not as concerned about performance.

While I back up off-site to Rackspace and AWS clouds, I also have a local disk based backup, along with creating periodic full Gold or master off-site copies. The off-site copies are made to removable Seagate GoFlex SATA drives using a USB to SATA GoFlex cable. I also have the GoFlex eSATA to SATA cable that comes in handy to quickly attach a SATA device to anything with an eSATA port, including my Lenovo X1.

As a precaution, I used a different HDD, containing data I was not worried about losing if something went wrong, to test the process before doing it with the drive containing backup data. Also as a precaution, the data on the backup drive is backed up to removable media and to my cloud provider.

Thanks again to both Dave and Duncan for their great tips; I hope that you find these and other material on their sites as useful as I do.

Meanwhile, time to get some other things done, as well as continue looking for and finding good workarounds and tricks to use in my various projects. Drop me a note if you see something interesting.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Thanks for viewing StorageIO content and top 2012 viewed posts

StorageIO industry trends cloud, virtualization and big data

2012 was a busy year (it was our 7th year in business) along with plenty of activity on StorageIOblog.com as well as on the various syndicate and other sites that pick up our content feed (https://storageioblog.com/RSSfull.xml).

Excluding traditional media venues, columns, articles, web casts and web site visits (StorageIO.com and StorageIO.TV), StorageIO generated content including posts and pod casts has reached over 50,000 views per month (and growing) across StorageIOblog.com and our partner or syndicated sites. Including both public and private, there were about four dozen in-person events and activities, not counting attending conferences or vendor briefing sessions, along with plenty of industry commentary. On the Twitter front, plenty of activity there as well, closing in on 7,000 followers.

Thank you to everyone who has visited the sites where you will find StorageIO generated content, along with industry trends and perspective comments, articles, tips, webinars, live in-person events and other activities.

In terms of what was popular on the StorageIOblog.com site, here are the top 20 viewed posts in alphabetical order.

Amazon cloud storage options enhanced with Glacier
Announcing SAS SANs for Dummies book, LSI edition
Are large storage arrays dead at the hands of SSD?
AWS (Amazon) storage gateway, first, second and third impressions
EMC VFCache respinning SSD and intelligent caching
Hard product vs. soft product
How much SSD do you need vs. want?
Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go
Is SSD dead? No, however some vendors might be
IT and storage economics 101, supply and demand
More storage and IO metrics that matter
NAD recommends Oracle discontinue certain Exadata performance claims
New Seagate Momentus XT Hybrid drive (SSD and HDD)
PureSystems, something old, something new, something from big blue
Researchers and marketers dont agree on future of nand flash SSD
Should Everything Be Virtualized?
SSD, flash and DRAM, DejaVu or something new?
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?
Why SSD based arrays and storage appliances can be a good idea

Moving beyond the top twenty read posts on StorageIOblog.com site, the list quickly expands to include more popular posts around clouds, virtualization and data protection modernization (backup/restore, HA, BC, DR, archiving), general IT/ICT industry trends and related themes.

I would like to thank the current StorageIOblog.com site sponsors Solarwinds (management tools including response time monitoring for physical and virtual servers) and Veeam (VMware and Hyper-V virtual server backup and data protection management tools) for their support.

Thanks again to everyone for reading and following these and other posts as well as for your continued support, watch for more content on the above and other related and new topics or themes throughout 2013.

Btw, if you are into Facebook, you can give StorageIO a like at facebook.com/storageio (thanks in advance) along with viewing our newsletter here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Many faces of storage hypervisor, virtual storage or storage virtualization

StorageIO industry trends cloud, virtualization and big data

Storage hypervisors were a popular buzzword bingo topic in 2012, with plenty of industry adoption and some customer deployment. Separating out the hype around storage hypervisors reveals conversations around backup, restore, BC, DR and archiving.

backup, restore, BC, DR and archiving
Cloud and virtualization components

Storage virtualization, along with virtual storage and storage hypervisors, has a theme of abstracting underlying physical hardware resources, similar to server virtualization. The abstraction can be for consolidation and aggregation, or for enabling agility, flexibility, emulation and other functionality.

Storage virtualization can be implemented in different locations, in many ways, with various functionality and focus. For example, the abstraction can occur on a server, in a virtual or physical appliance (e.g. tin wrapped software), in a network switch or router, as well as in a storage system. The focus can be aggregation, or data protection (HA, BC, DR, backup, replication, snapshots), on a homogeneous (all one vendor) or mixed vendor (heterogeneous) basis.

Here is a link to a guest post that I recently did over at The Virtualization Practice looking at storage hypervisors, virtual storage and storage virtualization. As is the case with virtual storage, storage virtualization and storage for virtual environments, what you call a storage hypervisor will probably vary depending on your views, spheres of influence, preferences and other factors.

Additional related material:

  • Are you using or considering implementation of a storage hypervisor?
  • Cloud, virtualization, storage and networking in an election year
  • EMC VPLEX: Virtual Storage Redefined or Respun?
  • Server and Storage Virtualization – Life beyond Consolidation
  • Should Everything Be Virtualized?
  • How many degrees separate you and your information?
  • Cloud and Virtual Data Storage Networking (CRC)
  • The Green and Virtual Data Center (CRC)
  • Resilient Storage Networks (Elsevier)
  • Btw, as a special offer for viewers, I have some copies of Resilient Storage Networking: Designing Flexible Scalable Data Infrastructures (Elsevier) available for $19.95, shipping and handling included. Send me an email or tweet (@storageio) to learn more and get your copy (major credit cards and PayPal accepted).

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud conversation, Thanks Gartner for saying what has been said

    StorageIO industry trends cloud, virtualization and big data

Thank you Gartner for your statements concurring with and endorsing the notion that clouds can be viable (however, do your homework); welcome to the club.

    Why am I thanking Gartner?

Simple: I appreciate Gartner now saying what has been said for a couple of years, hoping it will help amplify the theme to the Gartner followers and faithful.

    Gartner: Cloud storage viable option, but proceed carefully


    Images licensed for use by StorageIO via Atomazul / Shutterstock.com

Sounds like Gartner has come to the same conclusion that has been voiced for several years now in posts, articles, keynotes, presentations, webinars and other venues: when it comes to IT clouds, don't be scared. However, do your homework, be prepared, do your due diligence and run proof of concepts.

    Image of clouds, cloud and virtual data storage networking book

    Here are some related materials to prepare and plan for IT clouds (public and private):

    What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

Now for those who feel that free information or content is not worth its price, feel free to go to Amazon and buy some book copies here, subscribe to the Kindle version of the StorageIOblog, or contact us for an advisory consultation or other project. For everybody else, enjoy, and remember: don't be scared of clouds, do your homework, be prepared and keep in mind that clouds are a shared responsibility.

Disclosure: I was a Gartner client when I was working in an IT organization and then later as a vendor, however not anymore ;).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Congratulations Imation and Nexsan, are there any independent storage vendors left?

    StorageIO industry trends cloud, virtualization and big data

Last week Imation, the company known for making CDs, DVDs, magnetic tape and, in the past, floppy disks (diskettes), bought Nexsan, a company known for its SATA and SAS storage products.

Imation also owns (or should be known for owning) the TDK and Memorex names (remember "Is it real or is it Memorex?" If not, Google it). They have also had removable hard disk drive (RHDD) products for several years, including the Odyssey (I am in the process of retiring mine), as well as a partnership with the former ProStor for RDX, having acquired some of the assets of ProStor, namely their RDX based InfiniVault storage appliance. Imation has also been involved in some other things including USB and other forms of flash-based solid state devices (SSD), and a few years ago (2007) they launched cloud backup with DataGuard, before cloud backup had become a popular buzzword topic.

Imation has also divested parts of its business over the past several years, including some medical related (X-ray) business to Kodak, who occupies part of the headquarters building in Oakdale MN, or at least did the last time I drove by there on the way from the airport. They also divested their SAN lab, with some of the staff going to Glasshouse and other pieces going to Lionbridge (an independent test lab company). Beyond traditional data protection, backup/restore and archiving media or mediums from consumer to large-scale enterprise, Imation has also been involved in other areas involving recording. Imation has also done some other recent acquisitions around dedupe (Nine Technology).

For its part, Nexsan has extended its portfolio beyond SATA and SAS products with AutoMAID Intelligent Power Management (IPM), which gives the benefits of variable power and performance without the penalties of first generation MAID type products. Read more about IPM and related themes here, here and here. Nexsan also supports NAS and iSCSI solutions in addition to the archive and content or object storage focused Assureon product they bought a few years ago.

    This is a good acquisition for both companies as it gives Imation a new set of products to sell into their existing accounts and channels. It also can leverage Nexsan’s channel and solution selling skills giving them (Nexsan) a bigger brand and large parent for credibility (not that they did not have that in the past).

Here is a link to a piece done by Dave Raffo that includes some comments and perspectives from me. To say that the synergy here is about archiving or selling SSDs or storage would be too easy and miss a bigger potential. That potential is that Imation has been in the business of selling consumable accessories for protecting and preserving data. Notice I said consumable accessories, which in the past has meant manufacturing consumable media (e.g. floppy disks, CDs, DVDs, magnetic tape) as well as partnering around flash and HDDs.

In many environments from small to large to super-sized cloud and service providers, some types of storage systems, including some of those that Nexsan sells, can be considered a consumable medium, taking over the role that tape, CDs or DVDs were used for in the past. Instead of using tape, CDs or DVDs to protect HDD and SSD based data, HDD based solutions are being used for disk-to-disk (D2D) protection (part of modernizing data protection). D2D is being done with appliances, or in conjunction with cloud and object storage software stacks such as OpenStack Swift, Basho Riak CS, CloudStack, Cleversafe, Ceph and Caringo among others, in addition to appliances such as EMC ATMOS that can support 3rd party storage devices as consumable mediums. Keep in mind that there is no such thing as a data or information recession, and people and data are living longer and getting larger, both for big data and little data.

The big if in this acquisition, which IMHO is a fair price for both parties based on realistic valuations, is whether they can collectively execute on it. This means that Imation and Nexsan need to leverage each other's strengths, address any weaknesses, close gaps and expand into each other's markets and channels, selling the entire portfolio as opposed to becoming singularly focused on a particular area, tool or technology. If Imation can execute on this and Nexsan leverages their new parent, the result should be moving from roughly $85M USD in sales to $100M+, then $125M, then $150M and so forth over the next couple of years.

Even if Imation simply maintains revenues or sees a slight increase, that would also be a good deal for them, granted the industry pundits may not agree, so let us see where this is in a few years. However, if Imation can grow the Nexsan business, then it would become a very good deal. Thus, IMHO the price valuation for the deal has the risk built into it, something like when NetApp bought the Engenio business unit from LSI back in 2011 for about $480M USD. At that time, Engenio was doing about $705M USD in revenue and was seen by many industry pundits as being on the decline, thus a lower valuation. For its part, NetApp has been executing, maintaining the revenue of that business unit with some expansion, thus their execution so far is being rewarded for taking the risk.

    Let us see if Imation can do the same thing.

    Now, does that mean that Nexsan was the last of the independent storage vendors left?

Hardly, after all there is still Xiotech, excuse me, Xio, as they changed their name as part of a repackaging, relaunch and downsizing. There is DotHill, who supplies partners such as HP, or DotHill's former partner supplier Infortrend. If you are an Apple fan then you might know about Promise; if not, you should. Let's not forget about Data Direct Networks (DDN), who is still independent and, at around $200M (give or take several million) in revenue, very much still around.

How about Xyratex? Sure, they make the enclosures and appliances that many others use in their solutions, however they also have a storage solutions business focused on scale out, clustered and grid NAS based on Lustre. There are some others that I am drawing a blank on now (if you read this and are one of them, chime in), in addition to all the new or current generation of startups waiting to be bought (you can chime in as well to let people know who you are).

There is still consolidation taking place: smaller vendors being bought by mid-sized vendors, mid-sized vendors by big vendors, big vendors by mega vendors, and startups by the established.

    Again congratulations to both Imation and Nexsan, let us see who or what is next on the 2013 mergers and acquisition list, as well as who will join the where are they now club.

    Disclosure: Nexsan has been a StorageIO client in the past; however, Imation has not been a client, although they have bought me lunch before here in the Stillwater, MN area.

    With Imation having their own brand name and identity, not to mention TDK and Memorex, now I have to wonder will Nexsan be real or Memorex or something else? ;)

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)

    StorageIO industry trends cloud, virtualization and big data

    This is the second in a two-part industry trends and perspective looking at learning from cloud incidents, view part I here.

    There is good information, insight and lessons to be learned from cloud outages and other incidents.

Sorry cynics, no, that does not mean an end to clouds; they are here to stay. However, when and where to use them, along with what best practices to apply and how to be ready and configured for use, are part of the discussion. This means that clouds may not be for everybody or all applications, at least not today. For those who are into clouds for the long haul (either all in or partially), including current skeptics, there are many lessons to be learned and leveraged.

To gain confidence in clouds, a question I am routinely asked is: are clouds more or less reliable than what you are doing? It depends on what you are doing and how you will be using the cloud services. If you are applying HA and other BC or resiliency best practices, you may be able to configure around and isolate yourself from the more common situations. On the other hand, if you are simply using the cloud services as a low-cost alternative, selecting the lowest price and service class (SLAs and SLOs), you might get what you paid for. Thus, clouds are a shared responsibility: the service provider has things they need to do, and the user or person designing how the service will be used has some decision-making responsibilities.

Keep in mind that high availability (HA), resiliency, business continuance (BC) along with disaster recovery (DR) are the sum of several pieces. This includes people, best practices, processes including change management, good design eliminating points of failure and isolating or containing faults, along with how the components or technologies (e.g. hardware, software, networks, services, tools) are used. Good technology used in good ways can be part of a highly resilient, flexible and scalable data infrastructure. Good technology used in the wrong ways may not leverage the solutions to their full potential.
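As a simple back-of-the-envelope illustration of why design matters (my own example numbers, not from any particular provider's SLA), availability of components in series multiplies, while redundant components multiply their failure probabilities instead:

# Two components in series, each 99.9% available
echo "0.999 * 0.999" | bc -l     # ~0.998, roughly 17.5 hours of downtime per year

# Two redundant (parallel) components, each 99.9% available
echo "1 - (0.001 * 0.001)" | bc -l     # ~0.999999, well under a minute per year

This is why eliminating points of failure and isolating or containing faults shows up directly in the service levels you can deliver.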

    While it is easy to focus on the physical technologies (servers, storage, networks, software, facilities), many of the cloud services incidents or outages have involved people, process and best practices so those need to be considered.

These incidents or outages bring awareness, a level set, that this is still early in the cloud evolution lifecycle, and a reminder to move beyond seeing clouds as just a way to cut cost and instead see the importance and value of HA, resiliency, BC and DR. Learning from mistakes, taking action to correct or fix errors, and finding and cutting points of failure are part of a technology (or the use of it) maturing. These all tie into having services with service level agreements (SLAs) and service level objectives (SLOs) for availability, reliability, durability, accessibility, performance and security among others, to protect against mayhem or other things that can and do happen.

Images licensed for use by StorageIO via Atomazul / Shutterstock.com

The reason I mentioned earlier that AWS had another incident is that, like their peers or competitors who have had incidents in the past, AWS appears to be going through some growing, maturing, evolution related activities. During summer 2012 there was an AWS incident that affected Netflix (read more here: AWS and the Netflix Fix?). It should also be noted that there were earlier AWS outages where Netflix (read about Netflix architecture here) leveraged resiliency designs to try and prevent mayhem when others were impacted.

    Is AWS a lightning rod for things to happen, a point of attraction for Mayhem and others?

Granted, given their size, scope of services and how they are being used on a global basis, AWS is blazing new territory and experiences, similar to what other information services delivery platforms did in the past. What I mean is that while taken for granted today, open systems Unix, Linux and Windows-based along with client-server, midrange or distributed systems, not to mention mainframe hardware, software, networks, processes, procedures and best practices, all went through growing pains.

There are a couple of interesting threads going on over in various LinkedIn Groups based on some reporters' stories, including speculation on what happened, followed by some good discussions of what actually happened and how to prevent recurrence in the future.

    Over in the Cloud Computing, SaaS & Virtualization group forum, this thread is based on a Forbes article (Amazon AWS Takes Down Netflix on Christmas Eve) and involves conversations about SLAs, best practices, HA and related themes. Have a look at the story the thread is based on and some of the assertions being made, and ensuing discussions.

    Also over at LinkedIn, in the Cloud Hosting & Service Providers group forum, this thread is based on a story titled Why Netflix’ Christmas Eve Crash Was Its Own Fault with a good discussion on clouds, HA, BC, DR, resiliency and related themes.

    Over at the Virtualization Practice, there is a piece titled Is Amazon Ruining Public Cloud Computing? with comments from me and Adrian Cockcroft (@Adrianco) a Netflix Architect (you can read his blog here). You can also view some presentations about the Netflix architecture here.

    What this all means

    Saying you get what you pay for would be too easy and perhaps not applicable.

There are good free or low-cost services, just like good free content and other things; however, vice versa, just because something costs more does not make it better.

Otoh, there are services that charge a premium however may have no better, if not worse, reliability; the same goes for fee-based content or perceived value that is no better than what you get free.

    Additional related material

    Some closing thoughts:

    • Clouds are real and can be used safely; however, they are a shared responsibility.
    • Only you can prevent cloud data loss, which means do your homework, be ready.
    • If something can go wrong, it probably will, particularly if humans are involved.
    • Prepare for the unexpected and clarify assumptions vs. realities of service capabilities.
    • Leverage fault isolation and containment to prevent rolling or spreading disasters.
    • Look at cloud services beyond lowest cost or for cost avoidance.
    • What is your organizations culture for learning from mistakes vs. fixing blame?
    • Ask yourself if you, your applications and organization are ready for clouds.
    • Ask your cloud providers if they are ready for you and your applications.
    • Identify what your cloud concerns are to decide what can be done about them.
    • Do a proof of concept to decide what types of clouds and services are best for you.

    Do not be scared of clouds, however be ready, do your homework, learn from the mistakes, misfortune and errors of others. Establish and leverage known best practices while creating new ones. Look at the past for guidance to the future, however avoid clinging to, and bringing the baggage of the past to the future. Use new technologies, tools and techniques in new ways vs. using them in old ways.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud conversations: Gaining cloud confidence from insights into AWS outages

    StorageIO industry trends cloud, virtualization and big data

    This is the first of a two-part industry trends and perspectives series looking at how to learn from cloud outages (read part II here).

In case you missed it, there were some public cloud outages during the recent Christmas 2012 holiday season. One incident involved Microsoft Xbox (view the Microsoft Azure status dashboard here) where users were impacted, and the other was another Amazon Web Services (AWS) incident. Microsoft and AWS are not alone; most if not all cloud services have had some type of incident and have gone on to improve from those outages. Google has had issues with different applications and services, including some in December 2012 along with a Gmail incident that received coverage back in 2011.

For those interested, here is a link to the AWS status dashboard and a link to the AWS December 24 2012 incident postmortem. In the case of the recent AWS incident which affected users such as Netflix, the incident (read the AWS postmortem and Netflix postmortem) was tied to a human error. This is not to say AWS has more outages or incidents vs. others including Microsoft; it just seems that we hear more about AWS when things happen compared to others. That could be due to AWS's size and arguably market leading status, diversity of services and the scale at which some of their clients are using them.

Btw, if you were not aware, Microsoft Azure is about more than just supporting SQL Server, Exchange, SharePoint or Office; it is also an IaaS layer for running virtual machines such as Hyper-V, as well as a storage target for storing data. You can use Microsoft Azure storage services as a target for backing up or archiving or as general storage, similar to using AWS S3, Rackspace Cloud Files or other services. Some backup and archiving AaaS and SaaS providers including Evault partner with Microsoft Azure as a storage repository target.

When reading some of the coverage of these recent cloud incidents, I am not sure if I am more amazed by some of the marketing cloud washing, or the cloud bashing and uninformed reporting lacking research and insight. Then again, if someone repeats a myth often enough for others to hear and repeat, as it gets amplified the myth may assume the status of reality. After all, you may know the expression: if it is on the internet then it must be true?

Images licensed for use by StorageIO via Atomazul / Shutterstock.com

    Have AWS and public cloud services become a lightning rod for when things go wrong?

    Here is some coverage of various cloud incidents:

    The above are a small sampling of different stories, articles, columns, blogs, perspectives about cloud services outages or other incidents. Assuming the services are available, you can Google or Bing many others along with reading postmortems to gain insight into what happened, the cause, effect and how to prevent in the future.

    Do these recent incidents show a trend of increased cloud outages? Alternatively, do they say that the cloud services are being used more and on a larger basis, thus the impacts become more known?

    Perhaps it is a mix of the above, and like when a magnetic storage tape gets lost or stolen, it makes for good news or copy, something to write about. Granted there are fewer tapes actually lost than in the past, and far fewer vs. lost or stolen laptops and other devices with data on them. There are probably other reasons such as the lightning rod effect given how much industry hype around clouds that when something does happen, the cynics or foes come out in force, sometimes with FUD.

Similar to traditional hardware or software product vendors, some service providers have even tried to convince me that they have never had an incident, never lost, corrupted or compromised any data; yeah, right. Candidly, I put more credibility and confidence in a vendor or solution provider who tells me that they have had incidents and have taken steps to prevent them from recurring. Granted some of those steps might be made public while others might be under NDA; at least they are learning and implementing improvements.

    As part of gaining insights, here are some links to AWS, Google, Microsoft Azure and other service status dashboards where you can view current and past situations.

    What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

    Ok, nuff said for now (check out part II here )

    Disclosure: I am a customer of AWS for EC2, EBS, S3 and Glacier as well as a customer of Bluehost for hosting and Rackspace for backups. Other than Amazon being a seller of my books (and my blog via Kindle) along with running ads on my sites and being an Amazon Associates member (Google also has ads), none of those mentioned are or have been StorageIO clients.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    December 2012 StorageIO Update news letter

    StorageIO News Letter Image
    December 2012 News letter

    Welcome to the December 2012 year end edition of the StorageIO Update news letter including a new format and added content.

    You can get access to this news letter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the December 2012 edition as a brief version (short HTML sent via email), or as the full HTML or PDF versions.

    Visit the news letter page to view previous editions of the StorageIO Update.

    You can subscribe to the news letter by clicking here.

    Enjoy this edition of the StorageIO Update news letter, let me know your comments and feedback.

    Nuff said for now

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Predictions, did Mayans have it right, or did we read it wrong?

    StorageIO industry trends cloud, virtualization and big data

It is late in the day on December 12, 2012 and best I can tell, we are still here; for some, by the time you read this it will be a few days or weeks later, which means that either the Mayan calendar had it wrong, or we misinterpreted it. Some would say that December 12, 2012 is not the important date, that it is really December 21, 2012 when the world will end; ok, let's wait and see what happens in a few more days.

However, taking a step back from the Mayan calendar, it dawned on me that predictions such as today's Mayan calendar forecast are similar to others that happen around this time of the year. That is, the annual information technology or IT related predictions made by pundits or anybody else with an opinion, most of which are not even close. Granted, many predictions make good press and media things to read or listen to for entertainment. In some cases, these predictions are variations of what was predicted last year in 2011 and the year before in 2010, and the year before that, and so forth.

    StorageIO industry trends cloud, virtualization and big data

I'm still working on my predictions for 2013 and forward-looking into 2014; however, I keep getting interrupted fending off vendors and their PR surrogates calling or emailing asking if they can make contributions, or write my list for me (how thoughtful of them ;) ). For now, one of my predictions is that I hope to get my predictions for 2013 done before 2013; however, if you need something to hold you over, check this out from last year, or this from a few months ago.

I will also say that for 2013, those who see or view cloud, virtualization and big data (and little data) in pragmatic terms will be very prosperous. On the other hand, those who have narrow or constrained views will be envious of the others. Likewise there are plenty of new additions to the buzzword bingo lineup, with software defined having strong representation.

    StorageIO industry trends cloud, virtualization and big data

Like the Mayan calendar predictions, with annual technology predictions, are we reading them wrong, or are they simply wrong, and who if anybody cares? Or are they just garbage in, garbage out, or big data garbage in, big data garbage out results?

    In the meantime, I need to check that my local and cloud backups are working, try a restore test, have plenty of cash on hand, gas tanks full, cerveza in the fridge, propane for the generator and other things ready if the Mayans had it right, just off by a few days ;) .

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Storage comments from the field and customers in the trenches

    StorageIO industry trends cloud, virtualization and big data

When I was in Europe presenting some sessions at conferences and doing some seminars last month, I met and spoke with one of the attendees at the StorageExpo Holland event. The person (Hans Breemer) came up to visit with me after one of my presentations, which included SSD is in your future: When, where, with what and how, and Cloud and Virtual Data Storage Networking industry trends and perspectives. Note you can find additional material from various conferences and events on backup, restore, BC, DR and archiving accessible via the resources menu on the StorageIO web site.

As I always do, I invite attendees to feel free to follow up via email, Twitter, LinkedIn, Google+ or other venues with questions, comments, discussions and what they are seeing or running into in their environments.

    Some of the many different items discussed during my StorageExpo presentations included:

Recently Hans followed up, sent me some comments and asked if I would be willing to share them with others, such as whoever happens to read this. I also suggested to Hans that he start a blog (here is a link to his new blog), and that I would be happy to post his comments for others to see and join in the conversation; they are shown below.

    Hans Breemer wrote:

    Hi Greg,

we met each other recently at the Dutch Storage Expo after one of your sessions. We briefly discussed the current trends in the storage market, and the “risks” or “threats” (read: challenges) they mean to “us”, the storage guys. Often neglected by the sales guys…

    Please allow me a few lines to elaborate a bit more and share some thoughts from the field. :-)

    1. Bigger is not better?

Each iteration in the new disk technologies (SATA or SAS) means we get less IOPS for the bucks. Pound for pound, that is. Of course the absolute amount of IOPS we can get from a HDD increases all the time. Where 175 IOPS was top speed a few years ago, we sometimes see figures close to 220 IOPS per physical drive now. This looks good in the brochure, just as the increased capacity does. However, what the brochure doesn’t tell us is that if we look at the IOPS/capacity ratio, we’re walking backwards. A few years ago we could easily sell over 1000 IOPS/TB. Currently we can’t anymore; we’re happy to reach 500 IOPS/TB. I know it has always been like that. However with the introduction of SATA in the enterprise storage world, I feel things have gotten even worse.

    2. But how about SSD’s then?

True and agree. In the world of HDD’s growing bigger and bigger, we actually need SSD’s, and this technology is the way forward from an IOPS perspective. SSD’s have a great future ahead of them (despite being with us already for some time). I do doubt that at the moment SSD’s already have the economical ability to fill the gap though. They offer many thousands of IOPS, and for dedicated high-end solutions they offer what we weren’t able to deliver for decades. More IOPS than you need! But what about the “1000 IOPS/TB” market? Let’s call it the middle market.

    3. SSD’s as a lubricant?

You must have heard every vendor about Adaptive Storage Tiering, Auto Tiering etc. All based on the theorem that most of our IO’s come from a relatively small disk section. Thus we can improve the total performance of our array by only adding a few percent of SSD. Smart technology identifies the hot tracks on our disks and promotes these to SSD’s. We can even demote cold tracks to big SATA drives. Think green, think ecological footprint, etc. For many applications this works well. Regular Windows servers, file servers and VMware ESX servers actually seem to like adaptive storage tiering, and I think I know why: a positive tradeoff of using VMDK’s. (I might share a few lines about FAST VP do’s and don’ts next time if you don’t mind)

4. How about the middle market then, you might ask? Or, SSD’s as a band-aid?

For the middle market, the above developments are sort of a disaster. Think SAP running on Sun Solaris, think the average Microsoft SQL Server, think Oracle databases. These are the typical applications that need “middle market” IOPS. Many of these applications have a freakish IO pattern: OLTP during daytime, backup in the evening and batch jobs at night. Not to mention end of month runs, DTA (Dev-Test-Acceptance) streets that sleep for two weeks or are constantly upgraded or restored. These applications hardly benefit from “smart technologies”. The IO behavior is too random, too unpredictable, leading to saturated SATA pools, and EFD’s that are hardly doing more IO’s than the FC drives they’re supposed to relieve. Add more SSD’s we’re told. Use less SATA we’re told. But it hardly works. Recently we acquired a few new Vmax arrays without EFD or FASTVP, for the sole purpose of hosting these typical middle market applications. Affordable, predictable performance. But then again, our existing Vmax 20k had full size 600GB 15K rpm drives; with the Vmax 40k we’re “encouraged” to use small form factor 600GB 10K rpm drives. Again a small step backwards?

    5. The storage tiering debacle.

Last but not least, some words I’d like to share with you about storage tiering. We’re encouraged (again) to sell storage in different tiers. Makes sense. To some extent it does, yes. Host your most IO eager application on expensive, SSD based storage. And host your DTA or other less business critical application on FC or SATA quality HDD’s. But what if the less business critical application needs to be backed up in the evening, and while doing so completely saturates your SATA pool? Or what if the Dev server creates just as many IO’s as the Prod environment does? People don’t seem to care, it seems. To have people realize how many IO’s they actually need and use, we are reporting IO graphs for all servers in our environment. Our tiering model is based on IOPS/TB and IO response time.

    Tier X would be expensive, offering 800 IOPS/TB @ avg 10ms
    Tier Y would be the cheaper option offering 400 IOPS/TB @ avg 15 ms

The next step will be to implement front end controls and actually limit a host to some ceiling. For instance, 2 times the limit described in the tier description, thus allowing for peak loads and backups.

    Do we need to? I think so…

    Greg, this small message is slowly turning into a plea. And that is actually what it is, a plea to our storage vendors, and to our evangelists. If they want us to deliver, I feel they should talk to us, and listen to us (and you!).

    Cheers,

    Hans Breemer 

    ps, I love my job, this world and my role to translate promises and demands into solutions that work for my customers. I do take care though not to create solution that will not work, despite what the brochure said.

    pps, please feel free to share the above if needed.

    Here is my response to Hans:

    Hello Hans good to hear from you and thanks for the comments.

    Great perspectives and in the course of talking with your peers around the world, you are not alone in your thinking.

Often I see disconnects between customers and vendors. Vendors (often driven by their market research) think they know what the customer needs and issues are, and many actually do. However I often see a reliance on market research data with many degrees of separation, as opposed to direct and candid insight. Likewise some vendors spend more time talking about how they listen to the customer vs. how much time they actually do so.

    On the other hand, I routinely see customers fall into the trap of communicating wants (nice to haves) instead of articulating needs (what is required). Then there is confusing industry adoption with customer deployment, not to mention concerns over vendor, technology or services lock-in.

    Hope all else is well.

    Cheers
    gs

Check out Hans' new blog and feel free to leave your comments and perspectives here or via other venues.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    HPs big December 3rd storage announcement

HP has been talking up and promoting for several weeks (ok, months) their upcoming December 3rd storage announcements from the HP Discover event in Frankfurt, Germany.

Well it's now afternoon, which means the early Monday morning December 3rd embargoes have been lifted, so I can now talk about what HP shared last Friday about today's announcements. Basically what I received was a series of press releases as well as a link to their updated web site providing information about today's announcements.

HP has enhanced 3PAR, aka the P10000, with new models, including for entry-level as well as higher performance enterprise needs. This should also beg the question for many longtime EVA (excuse me, P6000) customers: have they hit the end of the line? For scale out storage, HP has the StoreAll solutions (think products formerly marketed as certain X9000 models based on Ibrix) with enhancements for analytics, bulk and various types of big data. In addition HP has enhanced its backup and recovery capabilities and dedupe products, including integration with Autonomy (here and here), along with capacity on demand services.

    New 3PAR (P10000 models)

    New StoreAll storage system

From the surface and what I have been able to see so far, this looks like a good set of incremental enhancements from HP. Not much else to say until I can get some time to dig around deeper for more details; however, check out Calvin Zito (aka @hpstorageguy), the HP storage blogger, who should have more information from HP.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Ceph Day Amsterdam 2012 (Object and cloud storage)

    StorageIO industry trends cloud, virtualization and big data

    Recently while I was in Europe presenting some sessions at conferences and doing some seminars, I was invited by Ed Saipetch (@edsai) of Inktank.com to attend the first Ceph Day in Amsterdam.

    Ceph day image

As luck or fate would have it, I was in Nijkerk, which is about an hour train ride from Amsterdam central station, plus I had a free day in my schedule. After a morning train ride and nice walk from Amsterdam Central, I arrived at the Tobacco Theatre (a former tobacco trading venue) where Ceph Day was underway, in time for a lunch of a kroketten sandwich.

    Attendees at Ceph Day

Let's take a quick step back and address, for those not familiar, what Ceph (Cephalanthera) is and why it was worth spending a day to attend this event. Ceph is an open source distributed object scale out (e.g. cluster or grid) software platform running on industry standard hardware.

Dell server supporting ceph demo
Sketch of ceph demo configuration

Ceph is used for deploying object storage, cloud storage and managed services, general purpose storage for research, commercial, scientific, high performance computing (HPC) or high productivity computing (commercial), along with backup or data protection and archiving destinations. Other software similar in functionality or capabilities to Ceph includes OpenStack Swift, Basho Riak CS, Cleversafe, Scality and Caringo among others. There are also tin wrapped software (e.g. appliances or pre-packaged) solutions such as Dell DX (Caringo), DataDirect Networks (DDN) WOS, EMC ATMOS and Centera, Amplidata and HDS HCP among others. From a service standpoint, these solutions can be used to build services similar to Amazon S3 and Glacier, Rackspace Cloud Files and Cloud Block, DreamHost DreamObjects and HP Cloud storage among others.

    Ceph cloud and object storage architecture image

At the heart of Ceph is RADOS, a distributed object store that consists of peer nodes functioning as object storage devices (OSD). Data can be accessed via REST (Amazon S3 like) APIs, libraries, CephFS and gateways, with information being spread across nodes and OSDs using a CRUSH based algorithm (note Sage Weil is one of the authors of CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data). Ceph is scalable in terms of performance, availability and capacity by adding extra nodes with hard disk drives (HDD) or solid state devices (SSD). One of the presentations pertained to DreamHost, an early adopter of Ceph, which used it to build their DreamObjects (cloud storage) offering.
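To get a feel for the object model, here is a minimal sketch using the rados command line tool that ships with Ceph, assuming a running cluster and a pool named data (the pool and object names here are hypothetical, for illustration only):

rados -p data put greeting ./hello.txt      # store a local file as an object named greeting
rados -p data ls                            # list objects in the pool
rados -p data get greeting /tmp/hello.txt   # read the object back out

Behind the scenes, CRUSH computes which OSDs hold the replicas of each object, so there is no central lookup table in the data path.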

    Ceph cloud and object storage deployment image

In addition to storage nodes, there is also an odd number of monitor nodes to coordinate and manage the Ceph cluster (an odd number so the monitors can maintain a majority quorum), along with optional gateways for file access. In the above figure (via DreamHost), load balancers sit in front of gateways that interact with the storage nodes. The storage node in this example is a physical server with 12 x 3TB HDDs, each configured as an OSD.

    Ceph dreamhost dreamobject cloud and object storage configuration image

In the DreamHost example above, there are 90 storage nodes plus 3 management nodes; the total raw storage capacity (no RAID) is about 3PB (12 x 3TB = 36TB per node, x 90 = 3.24PB). Instead of using RAID or mirroring, each object's data is replicated or copied to three (e.g. N=3) different OSDs (on separate nodes), where N is adjustable for a given level of data protection, for a usable storage capacity of about 1PB.

Note that for more usable capacity and lower availability, N could be set lower, while a larger value of N would give more durability or data protection at a higher storage capacity overhead cost. In addition to using JBOD configurations with replication, Ceph can also be configured with a combination of RAID and replication, providing more flexibility for larger environments to balance performance, availability, capacity and economics.
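To make the replication arithmetic concrete, here is a quick sketch using the DreamHost numbers above, with n as the replica count:

raw_tb=$((90 * 12 * 3))               # 90 nodes x 12 OSDs x 3TB = 3240TB, about 3.24PB raw
n=3                                   # replicas per object (N=3)
echo "usable: $((raw_tb / n)) TB"     # about 1080TB, roughly 1PB usable

Setting n=2 in the same sketch would yield about 1.6PB usable with less protection, which is the capacity vs. durability trade-off in action.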

    Ceph dreamhost and dreamobject cloud and object storage deployment image

    One of the benefits of Ceph is the flexibility to configure it how you want or need for different applications. This can be in a cost-effective hardware light configuration using JBOD or internal HDDs in small form factor generally available servers, or high density servers and storage enclosures with optional RAID adapters along with SSD. This flexibility is different from some cloud and object storage systems or software tools which take a stance of not using or avoiding RAID vs. providing options and flexibility to configure and use the technology how you see fit.

    Here are some links to presentations from Ceph Day:
    Introduction and Welcome by Wido den Hollander
    Ceph: A Unified Distributed Storage System by Sage Weil
    Ceph in the Cloud by Wido den Hollander
    DreamObjects: Cloud Object Storage with Ceph by Ross Turk
    Cluster Design and Deployment by Greg Farnum
    Notes on Librados by Sage Weil

    Presentations during ceph day

While at Ceph Day, I was able to spend a few minutes with Sage Weil, Ceph creator and founder of inktank.com, to record a pod cast (listen here) about what Ceph is, where and when to use it, along with other related topics. Also while at the event I had a chance to sit down with Curtis (aka Mr. Backup) Preston, where we did a simulcast video and pod cast. The simulcast involved Curtis recording this video with me as a guest discussing Ceph, cloud and object storage, backup, data protection and related themes, while I recorded this pod cast.

One of the interesting things I heard, or actually did not hear, at the Ceph Day event, which I tend to hear at related conferences such as SNW, was a focus on where and how to use, configure and deploy Ceph, along with various configuration options and replication or copy modes, as opposed to going off on erasure codes or other tangents. In other words, instead of focusing on the data protection protocol and algorithms, or what is wrong with the competition or other architectures, the Ceph Day focus was on removing cloud and object storage objections and on enablement.

    Where do you get Ceph? You can get it here, as well as via 42on.com and inktank.com.

Thanks again to Sage Weil for taking time out of his busy schedule to record a pod cast talking about Ceph, as well as to 42on.com and inktank for hosting, and for the invitation to attend the first Ceph Day in Amsterdam.

    View of downtown Amsterdam on way to train station to return to Nijkerk
    Returning to Amsterdam central station after Ceph Day

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Garbage data in, garbage information out, big data or big garbage?

    StorageIO industry trends cloud, virtualization and big data

    Do you know the computer technology saying, garbage data in results in garbage information out?

    In other words even with the best algorithms and hardware, bad, junk or garbage data put in results in garbage information delivered. Of course, you might have data analysis and cleaning software to look for, find and remove bad or garbage data, however that’s for a different post on another day.

    If garbage data in results in garbage information out, does garbage big data in result in big garbage out?

    I’m sure my sales and marketing friends or their surrogates will jump at the opportunity to tell me why and how big data is the solution to the decades old garbage data in problem.

    Likewise they will probably tell me big data is the solution to problems that have not even occurred or been discovered yet, yeah right.

    However garbage data does not discriminate or show preference towards big data or little data, in fact it can infiltrate all types of data and systems.

Let's shift gears from big and little data to how all of that information is protected, backed up, replicated and copied for HA, BC, DR, compliance, regulatory or other reasons. I wonder how much garbage data is really out there, and how many garbage backups, snapshots, replicas or other copies of data exist? Sounds like a good reason to modernize data protection.

If we don't know where the garbage data is, how can we know if there is a garbage copy of the data for protection on some other tape, disk or cloud? That also means plenty of garbage data to compact (e.g. compress and dedupe) to cut its data footprint impact, particularly in tough economic times.
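As a toy illustration of hunting for garbage copies, here is a sketch that flags files with identical checksums as candidate duplicates (/data is a hypothetical path, and this ignores performance and checksum-collision caveats):

find /data -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate

Each group in the output is a set of byte-identical files, in other words candidate copies you may not need to keep, back up or replicate.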

    Does this mean then that the cloud is the new destination for garbage data in different shapes or forms, from online primary to back up and archive?

    Does that then make the cloud the new virtual garbage dump for big and little data?

Hmm, I think I need to empty my desktop trash bin and email deleted items among other digital housekeeping chores now.

On the other hand, I just had a thought about orphaned data and orphaned storage; however, let's let those sleeping dogs lie where they rest for now.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Data Center Infrastructure Management (DCIM) and IRM

    StorageIO industry trends cloud, virtualization and big data

There are many business drivers and technology reasons for adopting data center infrastructure management (DCIM) and infrastructure resource management (IRM) techniques, tools and best practices. Today's agile data centers need updated management systems, tools and best practices that allow organizations to plan, run at a low cost, and analyze for workflow improvement. After all, there is no such thing as an information recession, driving the need to move, process and store more data. With budget and other constraints, organizations need to be able to stretch available resources further while reducing costs, including for physical space and energy consumption.

    The business value proposition of DCIM and IRM includes:

    DCIM, Data Center, Cloud and storage management figure

Data Center Infrastructure Management (DCIM), also known as IRM, has, as the name describes, a focus on managing resources in the data center or information factory. IT resources include physical floor and cabinet space, power and cooling, networks and cabling, physical (and virtual) servers and storage, and other hardware and software management tools. For some organizations, DCIM will have a more facilities oriented view focusing on physical floor space, power and cooling. Other organizations will have a converged view crossing hardware, software and facilities, along with how those are used to effectively deliver information services in a cost-effective way.

Common to all DCIM and IRM practices are metrics and measurements, along with other related information about available resources, for gaining situational awareness. Situational awareness enables visibility into what resources exist, how they are configured and being used, by what applications, and their performance, availability, capacity and economic effectiveness (PACE) to deliver a given level of service. In other words, DCIM enabled with metrics and measurements that matter allows you to avoid flying blind and to make prompt and effective decisions.
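As one example of a metric that matters on the facilities side, power usage effectiveness (PUE) compares total facility power to the power delivered to IT equipment. A quick sketch with made-up numbers:

facility_kw=500                               # total facility power draw (hypothetical)
it_kw=350                                     # power consumed by IT equipment (hypothetical)
echo "scale=2; $facility_kw / $it_kw" | bc    # PUE of about 1.42; closer to 1.0 is better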

    DCIM, Data Center and Cloud Metrics Figure

    DCIM comprises the following:

    • Facilities, power (primary and standby, distribution), cooling, floor space
    • Resource planning, management, asset and resource tracking
    • Hardware (servers, storage, networking)
    • Software (virtualization, operating systems, applications, tools)
    • People, processes, policies and best practices for management operations
    • Metrics and measurements for analytics and insight (situational awareness)

The evolving DCIM model is elastic, multi-tenant, scalable, flexible, metered and service-oriented. Service-oriented means a combination of being able to rapidly deliver new services while keeping customer experience and satisfaction in mind. Also part of being focused on the customer is enabling organizations to be competitive with outside service offerings while being more productive and economically efficient.

    DCIM, Data Center and Cloud E2E management figure

    While specific technology domain areas or groups may be focused on their respective areas, interdependencies across IT resource areas are a matter of fact for efficient virtual data centers. For example, provisioning a virtual server relies on configuration and security of the virtual environment, physical servers, storage and networks along with associated software and facility related resources.

    You can read more about DCIM, ITSM and IRM in this white paper that I did, as well as in my books Cloud and Virtual Data Storage Networking (CRC Press) and The Green and Virtual Data Center (CRC Press).

    Ok, nuff said, for now.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved