Amazon cloud storage options enhanced with Glacier

StorageIO industry trend for storage IO

In case you missed it, Amazon Web Services (AWS) has enhanced its cloud compute (Elastic Compute Cloud or EC2) along with its storage offerings. These include Relational Database Service (RDS), DynamoDB, Elastic Block Store (EBS), and Simple Storage Service (S3). The enhancements include new functionality along with improved availability or reliability in the wake of recent events (outages or service disruptions). Earlier this year AWS announced their Cloud Storage Gateway solution, which you can read an analysis of here. More recently AWS announced provisioned IOPS among other enhancements (see the AWS whats new page here).

Amazon Web Services logo

Before Glacier was announced, options for Amazon storage services centered on general purpose S3, or EBS used with other Amazon services. S3 has given users the ability to select different availability zones (e.g. geographical regions where data is stored) along with levels of reliability at different price points for their applications or services being offered.

Note that the flexibility of AWS S3 lends itself to individuals or organizations using it for various purposes, ranging from storing backup or file sharing data to serving as a target for other cloud services. S3 pricing varies depending on which availability zones you select as well as whether you choose standard or reduced redundancy. As its name implies, reduced redundancy trades a lower level of redundancy (and thus a potentially longer recovery time objective (RTO)) in exchange for a lower cost per given amount of space capacity.
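
To make the standard vs. reduced redundancy choice concrete, here is a minimal sketch using the present-day boto3 library (which postdates this post); the bucket and key names are hypothetical placeholders, not anything from AWS or this article.

```python
# Minimal sketch (assumption, not from the original post): uploading an object
# to S3 with the reduced redundancy storage class via boto3. Bucket and key
# names are hypothetical placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

with open("backup-2012-08.tar.gz", "rb") as data:
    s3.put_object(
        Bucket="example-storageio-bucket",      # hypothetical bucket name
        Key="backups/backup-2012-08.tar.gz",
        Body=data,
        StorageClass="REDUCED_REDUNDANCY",      # trades redundancy for lower cost
    )
```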

AWS has now announced a new class or tier of storage service called Glacier, which, as its name implies, moves very slowly while being capable of supporting large amounts of data. In other words, it targets inactive or seldom accessed data where the emphasis is on ultra-low cost in exchange for a longer RTO. In exchange for an RTO that AWS states is measured in hours, your monthly storage cost can be as low as 1 cent per GByte, or about 12 cents per GByte per year, plus any extra fees (see here).

Here is a note that I received from the Amazon Web Services (AWS) team:

Dear Amazon Web Services Customer,
We are excited to announce the immediate availability of Amazon Glacier – a secure, reliable and extremely low cost storage service designed for data archiving and backup. Amazon Glacier is designed for data that is infrequently accessed, yet still important to keep for future reference. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance. With Amazon Glacier, customers can reliably and durably store large or small amounts of data for as little as $0.01/GB/month. As with all Amazon Web Services, you pay only for what you use, and there are no up-front expenses or long-term commitments.

Amazon Glacier is:

  • Low cost– Amazon Glacier is an extremely low-cost, pay-as-you-go storage service that can cost as little as $0.01 per gigabyte per month, irrespective of how much data you store.
  • Secure – Amazon Glacier supports secure transfer of your data over Secure Sockets Layer (SSL) and automatically stores data encrypted at rest using Advanced Encryption Standard (AES) 256, a secure symmetric-key encryption standard using 256-bit encryption keys.
  • Durable– Amazon Glacier is designed to give average annual durability of 99.999999999% for each item stored.
  • Flexible -Amazon Glacier scales to meet your growing and often unpredictable storage requirements. There is no limit to the amount of data you can store in the service.
  • Simple– Amazon Glacier allows you to offload the administrative burdens of operating and scaling archival storage to AWS, and makes long term data archiving especially simple. You no longer need to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.
  • Designed for use with other Amazon Web Services – You can use AWS Import/Export to accelerate moving large amounts of data into Amazon Glacier using portable storage devices for transport. In the coming months, Amazon Simple Storage Service (Amazon S3) plans to introduce an option that will allow you to seamlessly move data between Amazon S3 and Amazon Glacier using data lifecycle policies.

Amazon Glacier is currently available in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), and Asia Pacific (Japan) Regions.

A few clicks in the AWS Management Console are all it takes to setup Amazon Glacier. You can learn more by visiting the Amazon Glacier detail page, reading Jeff Barr's blog post, or joining our September 19th webinar.
Sincerely,
The Amazon Web Services Team

StorageIO industry trend for storage IO

What is AWS Glacier?

Glacier is low-cost, lower performance (e.g. longer access time) storage suited to applications and data such as archives, or inactive and idle data that you are not in a hurry to retrieve. Pricing is pay as you go and can be as low as $0.01 USD per GByte per month (other optional fees may apply, see here) depending on availability zone. Availability zones or regions include the US West coast (Oregon or Northern California), US East coast (Northern Virginia), Europe (Ireland) and Asia (Tokyo).
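
For those who want to see what using the service looks like, here is a minimal sketch using today's boto3 library (an assumption on my part, not something from the original post); the vault and file names are hypothetical.

```python
# Minimal sketch (assumption): creating a Glacier vault and uploading an
# archive with boto3. Vault and file names are hypothetical placeholders.
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# Create a vault (a container for archives).
glacier.create_vault(vaultName="example-archive-vault")

# Upload an archive; keep the returned archiveId, it is the only handle
# you get for retrieving or deleting the archive later.
with open("old-projects-2010.tar.gz", "rb") as data:
    response = glacier.upload_archive(
        vaultName="example-archive-vault",
        archiveDescription="Inactive project files, 2010",
        body=data,
    )
print("ArchiveId:", response["archiveId"])
```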

Amazon Web Services logo

Now, what is understood should not have to be discussed; however, just to be safe, pity the fool who complains about signing up for AWS Glacier due to its penny per month per GByte cost and then finds it too slow for their iTunes or videos, as you know it is going to happen. Likewise, you know that some creative vendor or their surrogate is going to try to show a mismatch of AWS Glacier vs. their faster service that caters to a different usage model; it is just a matter of time.

StorageIO industry trend for storage IO

Let's be clear: Glacier is designed for low-cost, high-capacity, slow access of infrequently accessed data such as archives and similar items. This means that you will be more than disappointed if you try to stream a video, or access a document or photo from Glacier as you would from S3, EBS or any other cloud service. The reason is that Glacier is designed on the premise of low cost, high capacity and high availability at the expense of slow access time or performance. How slow? AWS states that you may have to wait several hours to reach your data when needed; that is the tradeoff. If you need faster access, pay more or find a different class and tier of storage service to meet that need, perhaps, for those with a real need for speed, AWS SSD capabilities ;).
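
That hours-long wait shows up directly in how retrieval works: it is a two-step, asynchronous job rather than a simple read. Here is a hedged sketch using boto3 (my assumption of what the flow looks like today); vault name and archive id are placeholders.

```python
# Minimal sketch (assumption): retrieving an archive from Glacier is an
# asynchronous job that typically completes in hours, which is the RTO
# tradeoff discussed above. Vault name and archive id are placeholders.
import time
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

job = glacier.initiate_job(
    vaultName="example-archive-vault",
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE_ARCHIVE_ID",   # saved from upload_archive()
    },
)

# Poll until the job finishes (expect several hours; SNS notifications are
# a better fit than polling for real workflows).
while not glacier.describe_job(vaultName="example-archive-vault",
                               jobId=job["jobId"])["Completed"]:
    time.sleep(15 * 60)  # check every 15 minutes

output = glacier.get_job_output(vaultName="example-archive-vault",
                                jobId=job["jobId"])
with open("restored.tar.gz", "wb") as f:
    f.write(output["body"].read())
```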

Here is a link to a good post over at Planforcloud.com comparing Glacier vs. S3, which is like comparing apples and oranges; however, it helps to put things into context.

Amazon Web Services logo

In terms of functionality, Glacier security includes secure sockets layer (SSL) for data in transit, advanced encryption standard (AES) 256 (256-bit encryption keys) for data at rest, along with AWS identity and access management (IAM) policies.
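
As an illustration of the IAM angle, here is a hedged sketch of a policy that scopes a user to a single vault; the account id, user name and vault name are hypothetical placeholders, and the exact actions you allow will depend on your own workflow.

```python
# Minimal sketch (assumption): an IAM policy limiting a user to one Glacier
# vault. Account id, user and vault names are hypothetical placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["glacier:UploadArchive",
                   "glacier:InitiateJob",
                   "glacier:DescribeJob",
                   "glacier:GetJobOutput"],
        "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/example-archive-vault",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="archive-operator",              # hypothetical IAM user
    PolicyName="glacier-example-vault-access",
    PolicyDocument=json.dumps(policy),
)
```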

Persistent storage is designed for 99.999999999% durability, with data automatically placed in different facilities on multiple devices for redundancy when it is ingested or uploaded. Self-healing is accomplished with automatic background data integrity checks and repair.
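
To put eleven nines of durability into perspective, here is a back-of-envelope calculation of my own (an illustration, not an AWS figure):

```python
# Back-of-envelope arithmetic (my own illustration, not an AWS figure): what
# 99.999999999% ("eleven nines") average annual durability implies.
objects_stored = 100_000_000                   # e.g. 100 million archives
annual_loss_probability = 1 - 0.99999999999    # 1e-11 per object per year

expected_losses_per_year = objects_stored * annual_loss_probability
print(expected_losses_per_year)                # 0.001 -> roughly one lost object
                                               # per 1,000 years at this scale
```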

Scale and flexibility are bound only by the size of your budget or credit card spending limit along with what availability zones and other options you choose. There is integration with other AWS services including Import/Export, where you can ship large amounts of data to Amazon using different media and mediums. Note that AWS has also made a statement of direction (SOD) that S3 will be enhanced to seamlessly move data in and out of Glacier using data lifecycle policies.
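
At the time of this post that S3-to-Glacier movement was only a statement of direction; as a hedged sketch of what such a lifecycle policy ended up looking like with today's boto3 API, the bucket and prefix below are placeholders.

```python
# Minimal sketch (assumption): an S3 lifecycle rule that transitions objects
# under a prefix to Glacier after 30 days. At the time of the original post
# this was only a statement of direction; names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-storageio-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-inactive-data",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```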

Part of stretching budgets for organizations of all sizes is to avoid treating all data and applications the same (a key theme of data protection modernization). This means classifying and addressing how and where different applications and data are placed on various types of servers and storage, along with revisiting and modernizing data protection.

While the low cost of Amazon Glacier is an attention getter, I am looking for more than just the lowest cost, which means I am also looking for reliability and security, among other things, to gain and keep confidence in my cloud storage service providers. As an example, a few years ago I switched from one cloud backup provider to another, not based on cost, but rather on functionality and the ability to leverage the service more extensively. In fact, I could switch back to the other provider and save money on the monthly bills; however, I would end up paying more in lost time, productivity and other costs.

StorageIO industry trend for storage IO

What do I see as the barrier to AWS Glacier adoption?

Simple: getting vendors and other service providers to enhance their products or services to leverage the new AWS Glacier storage category. This means backup/restore, BC and DR vendors ranging from Amazon itself (e.g. releasing S3 to Glacier automated policy based migration) to Commvault, Dell (via their acquisitions of AppAssure and Quest), EMC (Avamar, Networker and other tools), HP, IBM/Tivoli, Jungledisk/Rackspace, NetApp, Symantec and others, not to mention cloud gateway providers, will need to add support for these new capabilities.

As an Amazon EC2 and S3 customer, it is great to see Amazon continue to expand their cloud compute, storage, networking and application service offerings. I look forward to actually trying out Amazon Glacier for storing encrypted archive or inactive data to complement what I am doing. Since I am not using the Amazon Cloud Storage Gateway, I am looking into how I can use Rackspace Jungledisk to manage an Amazon Glacier repository similar to how it manages my S3 stores.

Some more related reading:
Only you can prevent cloud data loss
Data protection modernization, more than swapping out media
Amazon Web Services (AWS) and the NetFlix Fix?
AWS (Amazon) storage gateway, first, second and third impressions

As of now, it looks like I will have to wait until either Jungledisk adds native support (as they have today for managing my S3 storage pool), or the automated policy based movement between S3 and Glacier is transparently enabled.

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

IBM buys flash solid state device (SSD) industry veteran TMS

How much flash (or DRAM) based Solid State Device (SSD) do you want or need?

IBM recently took a flash step, announcing that it wants and needs more SSD capabilities in different packaging and functionality to meet the demands and opportunities of customers, business partners and prospects by acquiring Texas Memory Systems (TMS).

IBM buys SSD flash vendor TMS

Unlike most of the current generation of SSD vendors, which (aside from those actually making the dies (chips or semiconductors) or SSD drives) are startups or relatively new companies, TMS is an industry veteran. Where most current SSD vendors' experience (as companies) is measured in months or at best years, TMS has seen several generations and SSD adoption cycles during its multi-decade existence.

IBM buys SSD vendor Texas Memory Systems TMS

What this means is that TMS has been around during past dynamic random access memory (DRAM) based SSD cycles or eras, as well as being an early adopter and player in the current nand flash SSD era or cycle.

Granted, some in the industry do not consider the previous DRAM based generation of products as being SSD, and vice versa, some DRAM era SSD aficionados do not consider nand flash as being real SSD. Needless to say, there are many faces or facets to SSD, ranging in media (DRAM and nand flash among others) along with packaging for different use cases and functionality.

IBM, along with some other vendors, recognizes that the best type of IO is the one that you do not have to do. However, the reality is that some type of input/output (IO) operation needs to be done with computer systems. Hence the second best type of IO is the one that can be done with the least impact to applications in a cost-effective way to meet specific service level objective (SLO) requirements. This includes leveraging main memory or DRAM as cache or buffers, server-based PCIe SSD flash cards as cache or target devices, internal SSD drives, external SSD drives, SSD drives and flash cards in traditional storage systems or appliances, as well as purpose-built SSD storage systems.
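
As a toy illustration of the "best IO is the one you do not have to do" idea (my own sketch, not anything from IBM or TMS), here is a read-through cache in front of a slower block read; the function and block size are purely illustrative.

```python
# Toy illustration (my own sketch) of "the best IO is the one you do not have
# to do": a read-through cache kept in memory in front of a slower device read.
import functools

@functools.lru_cache(maxsize=4096)          # DRAM (or flash) acting as cache
def read_block(device_path: str, block_number: int) -> bytes:
    """Read one 4 KiB block; only cache misses reach the slower device."""
    with open(device_path, "rb") as dev:
        dev.seek(block_number * 4096)
        return dev.read(4096)

# Repeated reads of a hot block are served from cache; the physical IO
# happens once, which is the point of server-side DRAM and flash caching.
```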

While TMS does not build the actual nand flash single level cell (SLC) or multi-level cell (MLC) SSD drives (like those built by Intel, Micron, Samsung, SanDisk, Seagate, STEC and Western Digital (WD) among others), TMS does incorporate nand flash chips or components that are also used by others who make nand flash PCIe cards and storage systems.

StorageIO industry trend for storage IO

IMHO this is a good move for both TMS and IBM, both of whom have been StorageIO clients in the past (here, here and here), that was a disclosure btw ;), as it gives TMS, their partners and customers a clear path and a large organization able to invest in the technologies and solutions on a go forward basis. In other words, TMS, which had been looking to be bought, gets certainty about its future, as do its clients.

IBM, which has used SSD based components such as PCIe flash SSD cards and SSD based drives from various suppliers, gets a PCIe SSD card of its own, along with purpose-built, mature SSD storage systems that have lineages to both DRAM and nand flash-based experience. Thus IBM controls some of its own SSD intellectual property (e.g. IP) for PCIe cards that can in theory go into its servers, as well as storage systems and appliances that use Intel based (e.g. xSeries from IBM) and IBM Power processor based servers as a platform. Examples include the DS8000 (Power processor), and the Intel based XIV, SONAS, V7000, SVC, ProtecTier and PureSystems (some of which are Power based).

In addition IBM also gets a field proven purpose-built all SSD storage system to compete with those from startups (Kaminario, Purestorage, Solidfire, Violin and Whiptail among others), as well as those being announced from competitors such as EMC (e.g. project X and project thunder) in addition to SSD drives that can go into servers and storage systems.

The question should not be if SSD is in your future, but rather where you will be using it: in the server or a storage system, as a cache or a target, as a PCIe target or cache card, as a drive, or as a storage system. This also raises the question of how much SSD you need along with what type (flash or DRAM), for what applications and how configured, among other topics.

Storage and Memory Hierarchy diagram where SSD fits

What this means is that there are many locations and places where SSD fits; one type of product or model does not fit or meet all requirements. Thus IBM, with its acquisition of TMS along with presumed partnerships for other SSD based components, will be able to offer a diverse SSD portfolio.

StorageIO industry trend for storage IO

The industry trend is for vendors such as Cisco, Dell, EMC, IBM, HP, NetApp, Oracle and others, all of whom are either physical server and storage vendors or, in the case of EMC, a virtual server vendor partnered with Cisco (vBlock and VCE) and Lenovo for physical servers, to offer SSD in multiple forms across their portfolios.

Different types and locations for SSD

Thus it only makes sense for those vendors to offer diverse SSD product and solution offerings to meet different customer and application needs vs. having a single solution that users adapt to. In other words, if all you have is a hammer, everything looks like a nail; however, if you have a tool box of various technologies, then it comes down to being able to leverage them, including articulating what to use when, where, why and how for different situations.

I think this is a good move for both IBM and TMS. Now let's watch how IBM and TMS go beyond the press release, slide decks and webex briefings covering why it is a good move, and see the results of what is actually accomplished near and long-term.

Read additional industry trends and perspective commentary about IBM buying TMS here and here, as well as check out these related posts and content:

How much SSD do you need vs. want?
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
Speaking of speeding up business with SSD storage
Part I: PureSystems, something old, something new, something from big blue
The Many Faces of Solid State Devices/Disks (SSD)
SSD and Green IT moving beyond green washing

Meanwhile, congratulations to both IBM and TMS, ok, nuff said (for now).

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Open Data Center Alliance (ODCA) publishes two new cloud usage models

The Open Data Center Alliance (ODCA) has announced and published more cloud usage documents for data center customers. These new cloud usage models address customer demands for interoperability of various clouds and services. The earlier usage models covering Infrastructure as a Service (IaaS), among other topics, are now joined by new Software as a Service (SaaS) and Platform as a Service (PaaS) usage models along with a foundational document for cloud interoperability.

Unlike most industry trade groups or alliances that are vendor driven or centric, ODCA is a vendor independent consortium of global IT leaders (e.g. customers), with a 12 member steering committee drawn from member companies (e.g. customers); learn more about ODCA here.

Disclosure note, StorageIO is an ODCA member, visit here to become an ODCA member.

From the ODCA announcement of the new documents:

The documents detail expectations for market delivery to the organization's mission of open, industry standard cloud solution adoption, and discussions have already begun with providers to help accelerate delivery of solutions based on these new requirements. This suite of requirements was joined by a Best Practices document from National Australia Bank (NAB) outlining carbon footprint reductions in cloud computing. NAB's paper illustrates their leadership in innovative methods to report carbon emissions in the cloud and aligns their best practices to underlying Alliance requirements. All of these documents are available in the ODCA Documents Library.

The PaaS interoperability usage model outlines requirements for rapid application deployment, application scalability, application migration and business continuity. The SaaS interoperability usage model makes applications available on demand, and encourages consistent mechanisms, enabling cloud subscribers to efficiently consume SaaS via standard interactions. In concert with these usage models, the Alliance published the ODCA Guide to Interoperability, which describes proposed requirements for interoperability, portability and interconnectivity. The documents are designed to ensure that companies are able to move workloads across clouds.

It is great to see IT customer driven or centric groups step up and actually deliver content and material to help their peers, or in some cases competitors, that complements information provided by vendors and vendor driven trade groups.

As with technologies, tools and services that are often seen as competitive, it would be a mistake to view ODCA as being in competition with other industry trade groups and organizations, or vice versa. Rather, IT organizations and vendors can and should leverage the different content from the various sources. This is an opportunity, for example, for vendors to learn more about what customers are thinking or concerned about, as opposed to telling IT organizations what to look at, and vice versa.

Granted, some marketing organizations or even trade groups may not like that and may view groups such as ODCA as giving away control of who decides what is best for customers. Smart vendors, VARs, business partners, consultants and advisors are and will be leveraging material and resources such as ODCA, and likewise, groups like ODCA are open to including a diverse membership, unlike some pay to play, vendor centric industry trade groups. If you are a vendor, VAR or business partner, don't look at ODCA as a threat; instead, explore how your customers or prospects may be involved with or using ODCA material, and leverage that as a differentiator between you and your competitor.

Likewise don't be scared of vendor centric industry trade groups, alliances or consortiums; even the pay to play ones can have some value, although some have more value than others. For example, from a storage and storage networking perspective, there is the Storage Networking Industry Association (SNIA) along with its various groups focused on Green and Energy along with Cloud Data Management Interface (CDMI) related topics among others. There are also the SCSI Trade Association (STA), the Open Virtualization Alliance (OVA), the Open Fabric Alliance (OFA), the Open Networking Foundation (ONF) and the Computer Measurement Group (CMG), among many others that do good work and offer value with diverse content and offerings, some of which are free, including to non members.

Learn more about the ODCA here, along with access various documents including usage models in the ODCA document library here.

While you are at, why not join StorageIO and other members by signing up to become a part of the ODCA here.

Ok, nuff said for now.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Over 1,000 entries now on the StorageIO industry links page

Industry trends and perspective data protection modernization

Is your company, organization, or one that you are a fan of or represent, listed on the StorageIO industry links page? (Click here to learn more about it.)

The StorageIO industry links page has been updated with over a thousand different industry related companies, vendors, VARs, trade groups, part and solution suppliers along with cloud and managed service providers. The common theme across these industry links is information and data infrastructures, which means servers, storage, IO and networking, hardware, software, applications and tools, services, products and related items for traditional, virtual and cloud environments.

StorageIO server storage IO networking cloud and virtualization links

The industry links page is accessed from the StorageIO main web page via the Tools and Links menu tab, or via the URL https://storageio.com/links. An example of the StorageIO industry links page is shown below with six different menu tabs in alphabetical order.

StorageIO server storage IO networking cloud and virtualization links

Know of a company, service or organization that is not listed on the links page? If so, send an email note to info at storageio.com. If your company or organization is listed, contact StorageIO to discuss how to expand your presence on the links page and other related options.

Visit the updated StorageIO industry links page and watch for more updates, and click here to learn more about the links page.

Ok, nuff said for now.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

What are some endangered IT skills species?

Dan Tynan has a good piece over at InfoWorld discussing the 9 most endangered species (e.g. skill sets) in the IT workforce. His article is along the lines that the IT job landscape is evolving rapidly, and it provides some ideas and points for discussion on how to avoid becoming extinct.

Here is an excerpt from Dan's article:

How to avoid extinction: Broaden and diversify your knowledge base now, while there’s still time, says Greg Schulz, senior adviser for the StorageIO Group, an IT infrastructure consultancy.

“If you are the hardware guy, you better start learning and embracing software,” he says. “If you are the software geek, time to appreciate the hardware. If you are infrastructure-focused, it’s time to learn about the business and its applications. You don’t want to be overgeneralized, but make sure to balance broader knowledge with depth in different areas.”

Check out Dan's article to see what the other endangered skill sets are, along with other perspectives by myself and others, as well as what you can do to avoid becoming extinct. Hmm, maybe read a book? ;)

Ok, nuff said.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go

StorageIO industry trends and perspectives

In case you missed it, VMware recently announced it is spending $1.05 billion USD to acquire startup Nicira for their virtualization and software technology that enables software defined networks (SDN). Also last week, Oracle was in the news getting its hands slapped by the Better Business Bureau (BBB) National Advertising Division (NAD) for making misleading advertised performance claims vs. IBM.

On the heels of VMware buying Nicira for software defined networking (SDN), or what is also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with the announcement of its intent to acquire Xsigo. Founded in 2004, Xsigo has a hardware platform combined with software to enable attachment of servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with their version of IOV.

Now it is Oracle that has announced it will be acquiring IO, networking and virtualization hardware and software vendor Xsigo for an undisclosed amount. Xsigo has made its name in the IO virtualization (IOV) and converged networking space, along with server and storage virtualization, over the past several years, including partnerships with various vendors.

Buzz word bingo

Technology buzzwords and buzz terms can often be a gray area leaving plenty of room for marketers and PR folks to run with. Case in point AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength which is IO virtualization (aka IOV).

To me, and among others (here and here and here), it is interesting that Xsigo had not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as happened with Nicira (and now Oracle for that matter). In the press release that the Oracle marketing and PR folks put out on a Monday morning, some of the media and press, trade industry, financial and general news agencies alike, took the Oracle script hook, line and sinker and ran with it.

What was effective is how well many industry trade pubs and their analysts simply picked up the press release story and ran with it in the all too common race to see who can get the news or story out first, or before it actually happens in some cases.

Image of media, news papers

To be clear, not all pubs jumped, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at SpeakinginTech highlights. I know some who took the time to call, ask around and leverage their journalistic training to dig, research and find out what this really meant vs. simply taking and running with the script. An example of one of those calls was with Beth Pariseau (aka @pariseautt); you can read her stories here and here.

Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the more known (at least in some circles) IOV and virtual IO descriptions. What is also interesting is that just last week Oracle marketing had its hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance based advertisements for ExaData.

Oracle Exadata

Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

Both Oracle and Xsigo along with other InfiniBand (and some Ethernet and PCIe) focused vendors are members of the Open Fabric initiative, not to be confused with the group working on OpenFlow.

StorageIO industry trends and perspectives

Here are some other things to think about:

Oracle has a history of doing different acquisitions without disclosing terms, as well as doing them based on earn outs such as was the case with Pillar.

Oracle uses Ethernet in its servers and appliances, and has also been an adopter of InfiniBand, primarily for node to node communication, however also for server to application connectivity.

Oracle is also an investor in Mellanox, the folks that make InfiniBand and Ethernet products.

Oracle has built various stacks including ExaData (Database machine), Exalogic, Exalytics and Database Appliance in addition to their 7000 series of storage systems.

Oracle has done earlier virtualization related acquisitions including Virtual Iron.

Oracle has a reputation with some of their customers who love to hate them for various reasons.

Oracle has a reputation of being aggressive, even by other market leader aggressive standards.

Integrated solution stacks (aka stack wars), or what some remember as bundles, continue, and Oracle has many solutions.

What will happen to Xsigo as you know it today (besides what the press releases are saying)?

While Xsigo was not a member of the Open Networking Foundation (ONF), Oracle is.

Xsigo is a member of the Open Fabric Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.

StorageIO industry trends and perspectives

What’s my take?

While there are similarities in that both Nicira and Xsigo are involved with IO Virtualization, what they are doing, how they are doing it, who they are doing it with along with where they can play vary.

I am not sure what Oracle paid; however, assuming that it was in the couple of million dollars or less range, in cash, stock or a combination, both they and the investors, as well as some of the employees, friends and families, did ok.

Oracle also gets some intellectual property that it can combine with other earlier acquisitions via Sun and Virtual Iron, along with its investment in InfiniBand (and now Ethernet) vendor Mellanox.

Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

For Xsigo customers the good news is that you now know who will be buying the company; however, there are and should be questions about the future beyond what is being said in press releases.

Does this acquisition give Oracle a play in the software defined networking space like Nicira gives to VMware? I would say no, given the hardware dependency; however, it does give Oracle some extra technology to play with.

Likewise, while SDN is important and a popular buzzword topic, since OpenFlow comes up in conversations, perhaps that should be more of the focus vs. whether a solution is all software, or hardware plus software.

StorageIO industry trends and perspectives

I also find it entertaining how last week the Better Business Bureau (BBB) National Advertising Division (NAD) slapped Oracle's hands after IBM complained of misleading performance claims about Oracle ExaData vs. IBM. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, but rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story, fed it to the press and media, who then ran with it.

I'm not convinced that this is an all-out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (it makes for good headlines though). Instead, I'm seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle of summer price. Now if Oracle really wanted to go to battle with Cisco (and others), then there are others to buy such as Brocade, Juniper, etc. However, there are other opportunities for Oracle to be focused on (or side tracked by) right now.

Oh, let's also see what Cisco has to say about all of this, which should be interesting.

Additional related links:
Data Center I/O Bottlenecks Performance Issues and Impacts
I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
I/O Virtualization (IOV) Revisited
Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
The function of XaaS(X) Pick a letter
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?

StorageIO industry trends and perspectives

If you are interested in learning more about IOV and Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that’s enough links for now ;).

Ok, nuff said for now, as I have probably requalified for being on the Oracle you know what list for not sticking to the story script, oops, excuse me, I mean press release message.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Modernizing data protection with certainty

Speaking of modernizing data protection, back in June I was invited to be a keynote presenter on industry trends and perspectives at a series of five dinner events (Boston, Chicago, Palo Alto, Houston and New York City) sponsored by Quantum (that is a disclosure btw).

backup, restore, BC, DR and archiving

The theme of the dinner events was an engaging discussion around modernizing data protection with certainty, along with clouds, virtualization and related topics. Quantum and one of their business partner resellers started each event with introductions, followed by an interactive discussion led by myself, followed by David Chapa (@davidchapa), who tied the various themes to what Quantum is doing along with some of their customer success stories.

Themes and examples for these events build on my book Cloud and Virtual Data Storage Networking including:

  • Rethinking how, when, where and why data is being protected
  • Big data, little data and big backup issues and techniques
  • Archive, backup modernization, compression, dedupe and storage tiering
  • Service level agreements (SLA) and service level objectives (SLO)
  • Recovery time objective (RTO) and recovery point objective (RPO)
  • Service alignment and balancing needs vs. wants, cost vs. risk
  • Protecting virtual, cloud and physical environments
  • Stretching your available budget to do more without compromise
  • People, processes, products and procedures

Quantum is among the industry leaders with multiple technology and solution offerings addressing different aspects of data footprint reduction and data protection modernization. These include offerings for physical, virtual and cloud environments along with traditional tape, disk based, compression, dedupe, archive, big data, hardware, software and management tools. A diverse group of attendees has been at the different events, including enterprise and SMB, public, private and government across different sectors.

Following are links to some blog posts that covered first series of events along with some of the specific themes and discussion points from different cities:

Via ITKE: The New Realities of Data Protection
Via ITKE: Looking For Certainty In The Cloud
Via ITKE: Success Stories in Data Protection: Cloud virtualization
Via ITKE: Practical Solutions for Data Protection Challenges
Via David Chapa's blog

If you missed attending any of the above events, more dates are being added in August and September including stops in Cleveland, Raleigh, Atlanta, Washington DC, San Diego, Connecticut and Philadelphia with more details here.

Ok, nuff said for now, hope to see you at one of the upcoming events.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Kudos to Lenovo: Customer service redefined, or re-established?

Kudos to Lenovo who I called yesterday to get a replacement key for my X1 laptop keypad.

After spending time on their website, including finding the part number, SKU and other information, I could not figure out how to actually order the part. Concerned about calling and getting routed between different call centers, as is too often the case, I finally decided to give the phone route a try.

I was surprised, no, shocked at how quick and easy it was once I got routed to the Atlanta Lenovo support center to get what I needed.

Thus late yesterday afternoon when I called, the Atlanta Lenovo agent was able to take my laptop serial number, make and model, and a description of what part was needed, all without transferring me to other persons. They then made arrangements for, not a new replacement key, but rather an entire new keyboard, with total phone time of probably less than 15 minutes.

This morning by 10:30AM CT a box with the new replacement keyboard arrived. In-between calls and other work, in a matter of minutes the old keyboard was removed, the new one installed, tested and I now get to type normally instead of dealing with a broken Y key.

In less than 24 hours from making the call, UPS arrived back to pick up the old keyboard to return to the depot.

Here are some photos for you propeller heads (tech heads or geeks), beginning with the X1 keyboard and broken key before the replacement.

Lenovo X1 keyboard replacement

The following shows the keyboard removed, looking towards the screen, with the keyboard flat cables still installed. Note that the small black connectors (two of them) flip up and the cables slide out (or in for installation).
Lenovo X1 keyboard replacement

In this photo, you can see one of the two keyboard connectors, plus where the Samsung SSD I installed replaces the HDD that the X1 shipped with. Also shown is the Sierra wireless 4G card that I use while traveling, which provides an alternative when others are trying to figure out how to use available public WiFi.
Lenovo X1 keyboard replacement

In this image, you can see the DRAM (e.g. memory) along with the two connectors where the keyboard cables attach, before the cables have been reconnected.
Lenovo X1 keyboard replacement

With the new cables connected, the keyboard reinstalled and tested, the old keyboard has been boxed up, the return shipping sticker applied, UPS called and the box picked up, on its way back to Lenovo.
Lenovo X1 keyboard replacement

For that, kudos to Lenovo for delivering on what in the past was taken for granted as good customer service and support, however these days is all too often the exception.

Next time somebody asks why I use Lenovo ThinkPads, guess what story I will tell them.

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Announcing SAS SANs for Dummies book, LSI edition

There is a new (free) book that I'm a co-author of, along with Bruce Grieshaber and Larry Jacob (both of LSI), with a foreword by Harry Mason of LSI, President of the SCSI Trade Association, titled SAS SANs for Dummies, compliments of LSI.

SAS SANs for Dummies, LSI Edition

This new book (ebook and print hard copy) looks at Serial Attached SCSI (SAS) and how it can be used beyond traditional direct attached storage (DAS) configurations to support various types of storage media including SSD, HDD and tape. These configuration options include an entry-level SAN with SAS switches for small clusters or server virtualization, shared DAS, as well as a scale out back-end solution for NAS, object, cloud and big data storage solutions.

Here is the table of contents (TOC) of SAS SANs for Dummies

Chapter 1: Data storage challenges

  • Storage Growth Demand Drivers
  • Recognizing Challenges
  • Solutions and Opportunities

Chapter 2: Storage Area Networks

  • Introducing Storage Area Networks
  • Moving from Dedicated Internal to Shared Storage

Chapter 3: SAS Basics

  • Introducing the Basics of SAS
  • How SAS Functions
  • Components of SAS
  • SAS Target Devices
  • SAS for SANs

Chapter 4: SAS Usage Scenarios

  • Understanding SAS SANs Usage
  • Shared SAS SANs Scenarios including:
    • SAS in HPC environments
    • Big data and big bandwidth
    • Database, e-mail, back-office
    • NAS and object storage servers
    • Cloud, web and high-density
    • Server virtualization

Chapter 5: Advanced SAS Topics

  • The SAS Physical Layer
  • Choosing SAS Cabling
  • Using SAS Switch Zoning
  • SAS HBA Target Mode

Chapter 6: Nine Common Questions

  • Can You Interconnect Switches?
  • What Is SAS Cable Distance?
  • How Many Servers Can Be In a SAS SAN?
  • How Do You Manage SAS Zones?
  • How Do You Configure SAS for HA?
  • How Does SAS Zoning Compare to LUN Mapping?
  • Who Has SAS Solutions?
  • How Do SAS SANs Compare?
  • Where Can You Learn More?

Chapter 7: Next Steps

  • SAS Going Forward
  • Next Steps
  • Great Takeaways

Regardless of whether you are looking to use SAS as a primary SAN interface, leverage it for DAS, or implement back-end storage for big data, NAS, object, cloud or other types of scalable storage solutions, check out and get your free copy of SAS SANs for Dummies here, compliments of LSI.

SAS SANs for Dummies, LSI Edition

Click here to ask for your free copy of SAS SANs for Dummies, compliments of LSI; tell them Greg from StorageIO sent you and enjoy the book.

    Ok, nuff said.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Dell Storage Customer Advisory Panel (CAP)

    Dell Storage Customer Advisory Panel (CAP)

Recently I was asked by Dell to moderate and host their North America storage customer advisory panel (CAP) session (twitter #storagecap), which followed their 2012 storage forum event in Boston (see comments about the 2011 storage forum here). (Disclosure: Dell covered my trip to Boston.)

This was an interesting event in many ways because it was a diverse group, some of whom were long-time EqualLogic and Compellent customers (both before and post acquisition) of various sizes, along with Dell customers who have yet to buy storage from them.

    Dell Storage Customer Advisory Panel (CAP)
    Click on above image for video feed

Beyond the diversity of types of customers and their relationships with Dell, what also made this event interesting was that it was live streamed with professionally produced video and audio in addition to twitter and other social media coverage. However, what made the event even more interesting IMHO was the fact that, being a live event (watch the replay here) with video and audio as well as twitter, the attendees were urged to speak freely, with conversation among themselves providing feedback and commentary for Dell.

Sure there were songs of praise when and where deserved; however, unlike some made for social media vendor events that tend to be closer to sales pitches, this event also included some tough love feedback and comments for Dell, their products, services and event planners.


    Dell Storage CAP illustrators aka @ThinkLink

Oh, did I mention that other than some members of the Dell social media team (@dell_storage), who were in the room to help facilitate and coordinate the event itself, the real discussions were free and independent of Dell employees (other than reminders not to go into NDA land while live on the video and audio feed)? Dell had @ThinkLink doing live illustrations, capturing as images the discussion themes, topics and points of interest during the event, examples of which you can see in the following images.

    Dell Storage Customer Advisory Panel (CAP)Dell Storage Customer Advisory Panel (CAP)Dell Storage Customer Advisory Panel (CAP)Dell Storage Customer Advisory Panel (CAP)Dell Storage Customer Advisory Panel (CAP)Dell Storage Customer Advisory Panel (CAP)

    Dell Flickr images from the Storage CAP session

Kudos to Dell for having the courage, conviction and confidence to have a customer advisory panel event live streamed, and to allow the attendees to speak their minds free of a script or talking points guide. The session included having each participant take a turn putting themselves in the general manager's chair and saying what they would do, why, and how they would address customers and prospects.
After all, it's one thing to sit in the cheap seats playing arm-chair quarterback saying what you want; it's another saying why you need it, what the priority and impact are or would be, and how to get the message to the customer. Some of the topics covered included AppAssure for data protection, Compellent, EqualLogic and other recent acquisitions, products, service, support and community forums.

    Thanks to all who participated including @ThinkLink (illustrators), Dell Storage social media team (@dell_storage), Alison Krause (@AlisonDell), Gina Rosenthal (@gminks), Michelle Richard (@meesh_says) and particularly the participants Pete Koehler (@petergavink), Roger Lund (@rogerlund), Luigi Danakos (@nerdblurt), Dan Marbes (@danmarbes), Jeff Hengesbach (@jeffhengesbach), Steve Mickeler (@shmick), Ed Aractingi (@earactingi) and Dennis Heinle (@dheinle).

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Enabling Bitlocker on Microsoft Windows 7 Professional 64 bit

    Enabling Bitlocker on Microsoft Windows 7 Professional 64 bit
    Updated 6/24/18

A while back, I added a new laptop that required enabling Bitlocker on Microsoft Windows 7 Professional 64 bit. At that time some of my other devices were running Windows 7 Ultimate 32 bit with Bitlocker security encryption enabled (since upgraded to various Windows 10 editions). However, back then I ran into a problem getting Bitlocker to work on the 64 bit version of Windows 7 Professional.

Yes, I know I should not be using Windows, and I also have plenty of iDevices and other Apple products lying around. Likewise, to the security pros and security arm-chair quarterbacks: I know I should not be using Bitlocker, instead using Truecrypt, of which I have done some testing and may migrate to in the future, along with self-encrypting devices (SED).

However, let's stay on track here ;).

    Lenovo Thinkpad X1 Gen6
    Image courtesy of Lenovo.com

The problem that I ran into with my then new Lenovo X1 was that it came with Windows 7 Professional 64 bit, which has a few surprises when trying to turn on Bitlocker drive encryption. Initializing and turning on the Trusted Platform Module (TPM) was not a problem; however, for those needing to figure out how to do that, check out this Microsoft TechNet piece.

The problem was as simple as not having a tab or easy way to enable Bitlocker Drive Encryption with Windows 7 Professional 64 bit. After spending some time searching around various Microsoft and other sites to figure out how to hack, patch, script and do other things that would take time (and time is money), it dawned on me: could the solution to the problem be as simple as upgrading from Windows 7 Professional to Windows 7 Ultimate?

    Update: 6/25/18

While this post is about Windows 7, there are some new challenges with Windows 10 Bitlocker and removable devices including USB. These new issues are tied to Windows 10 running in BIOS instead of UEFI boot mode.

    Here are some additional Windows 10 Bitlocker related resources:

  • Via Microsoft: Bitlocker Frequently Asked Questions
  • Via Microsoft: Bitlocker Overview and Requirements
  • Via Intel: Converting Windows Installation from BIOS to UEFI
Microsoft Windows 7 via Amazon
Windows 7 image courtesy of Amazon.com

The answer was going to the Microsoft store (or Amazon among other venues) and, for $139.21 USD (with tax), purchasing the upgrade.

Once the transaction was complete, the update was automatic, and within minutes I had Bitlocker activated on the Lenovo X1 (TPM was previously initialized and turned on), a new recovery key protected and saved elsewhere, and the internal Samsung 830 256GB solid state device (SSD) initializing and encrypting. Oh, fwiw, yes the encryption of the 256GB SSD took much less time than it would on a comparable hard disk drive (HDD) or even an HHDD (hybrid HDD).
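
If you want to confirm the encryption progress and protection status along the way, here is a minimal sketch (my own convenience wrapper, not part of the upgrade steps above) that calls the built-in Windows manage-bde utility from Python; run it from an elevated prompt.

```python
# Minimal sketch (assumption): checking Bitlocker status on C: by calling the
# built-in Windows manage-bde utility from an elevated prompt. This is just a
# convenience wrapper, not part of the upgrade steps described above.
import subprocess

result = subprocess.run(
    ["manage-bde", "-status", "C:"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)   # shows conversion status, percentage encrypted,
                       # encryption method and protection status
```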

Could I have saved the $139.21 and spent some time on a workaround? Probably; however, I did not have the time or interest to go that route, and IMHO for my situation it was a bargain.

Sometimes spending a little money, particularly if you are short on (or place a value on) your time, can be a bargain, as opposed to when you are short on money however long on time.

I found the same to be true when I replaced the internal HDD that came with the Lenovo X1 with a Samsung 256GB SSD, in that it improved my productivity for writing and saving data. For example, in the first month of use I estimate easily two to three minutes of time saved per day waiting on things to be written to HDDs. In other words, two to three minutes times five days (10 to 15 minutes) times four weeks (40 to 60 minutes) starts to add up (e.g. small amounts or percentages spread over a large interval add up); more on using and justifying SSD in a different post.
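
Extending that arithmetic over a year (my own back-of-envelope illustration, with an assumed number of working weeks):

```python
# Back-of-envelope arithmetic (my own illustration) extending the estimate
# above: what two to three minutes saved per work day adds up to over a year.
minutes_saved_per_day = (2, 3)
work_days_per_year = 5 * 48            # assume ~48 working weeks

low, high = (m * work_days_per_year / 60 for m in minutes_saved_per_day)
print(f"{low:.0f} to {high:.0f} hours saved per year")   # roughly 8 to 12 hours
```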

    Microsoft Windows 7 Ultimate

    Samsung SSD image courtesy of Amazon.com

If your time is not of value, or you have a lot of it, then the savings may not be as valuable. On the other hand, if you are short on time or place a value on your time, you can figure out the benefits quite quickly (e.g. return on investment or traditional ROI).

    Where To Learn More

    Learn more about Windows, Bitlocker and related topics

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

The reason I bring the topic of time and money into this discussion about Bitlocker is to make the point that there are situations where spending some time has value, such as for the learning, the experience, the fun or simple entertainment aspect, not to mention a shortage of money. On the other hand, sometimes it is actually cheaper to spend some money to get to the solution or result as part of being productive or effective. For example, other than spending some time browsing various sites to figure out that there was an issue with Windows 7 Professional and Bitlocker, time that was educational and interesting, the money spent on the simple upgrade was worth it in my situation. While many if not most of you have since upgraded to Windows 8 or Windows 10, some may still have the need for enabling Bitlocker on Microsoft Windows 7 Professional 64 bit.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Green IT deferral blamed on economic recession might be result of green gap

    Storage I/O Industry Trends and Perspectives

I recently saw a comment somewhere that talked about Green IT being deferred or set aside due to lack of funding because of the ongoing global economic turmoil. For those who see Green IT in the context of green washing efforts that require spending to gain some benefit, that I can understand. After all, if your goal is simply to go and be, or be seen as being, green, there is a cost to doing that.

With tight or shrinking IT budgets there are other realities, and while organizations may want to do the right thing by helping the environment, that is often seen as overhead by financially conscious management.

On the other hand, turn the green washing messaging off, or at least dial it back a bit, as has been the case the past couple of years.

Expand the Green IT discussion, or change it around a bit, from being seen or perceived as green via energy efficiency or avoidance, to effectiveness, enhanced productivity, and doing more with what you have or with less, and there is a different opportunity.

    That opportunity is to meet the financial and business goals or requirements that as a by-product help the environment. In other words, expand the focus of Green IT to that of economics and improving on resource effectiveness and the environment gets a free ride, or, Green gets self-funded.

The Green and Virtual Data Center Book addressing optimization, effectiveness, productivity and economics

    The challenge is what I refer to as the Green Gap, which is the disconnect between what is talked about (e.g. messaging) and thus perceived to be Green IT and where common IT opportunities exist (or missed opportunities have occurred).

    Green IT, or at least the tenets of driving efficiency and effectiveness to use energy more effectively, address recycling and waste, remove hazardous substances and other items, continues to thrive. However, the green washing is subsiding, and over time organizations will not be as dismissive of Green IT in the context of improving productivity, reducing complexity and costs, optimization and related themes tied to economics, where the environment gets a free ride.

    Here are some related links:
    Closing the Green Gap
    Energy efficient technology sales depend on the pitch
    EPA Energy Star for Data Center Storage Update
    Green IT Confusion Continues, Opportunities Missed!
    How to reduce your Data Footprint impact (Podcast)
    Optimizing storage capacity and performance to reduce your data footprint
    Performance metrics: Evaluating your data storage efficiency
    PUE, Are you Managing Power, Energy or Productivity?
    Saving Money with Green Data Storage Technology
    Saving Money with Green IT: Time To Invest In Information Factories
    Shifting from energy avoidance to energy efficiency
    Storage Efficiency and Optimization: The Other Green
    Supporting IT growth demand during economic uncertain times
    The new Green IT: Efficient, Effective, Smart and Productive
    The other Green Storage: Efficiency and Optimization
    The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    How much SSD do you need vs. want?

    Storage I/O Industry Trends and Perspectives

    I have been getting asked by IT customers, VARs and even vendors how much solid state device (SSD) storage is needed or should be installed to address IO performance needs, to which my standard answer is: it depends.

    I am also being asked if there is a rule of thumb (RUT) for how much SSD you should have, either in terms of the number of devices or a percentage; IMHO, the answer is it depends. Sure, there are different RUTs floating around based on different environments, applications and workloads, however, are they applicable to your needs?

    What I would recommend is that instead of focusing on percentages, RUTs, or other SWAG estimates or PIROMA calculations, you look at your current environment and decide where the activity or issues are. If you know how many fast hard disk drives (HDDs) are needed to get to a certain performance level and amount of used capacity, that is a good starting point.
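
    As a rough, purely illustrative sketch of that starting point, the following Python snippet (with assumed per-device IOPS and capacity figures, not vendor specifications) estimates how many fast HDDs versus SSDs a hypothetical workload would need, where the device count is driven by whichever of performance or capacity is the larger constraint.

        # Rough sizing sketch, not a vendor formula: per-device IOPS and
        # capacity figures below are assumptions for illustration only.
        import math

        def devices_needed(target_iops, target_gb, dev_iops, dev_gb):
            # Device count is driven by whichever constraint (IOPS or capacity) is larger.
            return max(math.ceil(target_iops / dev_iops), math.ceil(target_gb / dev_gb))

        target_iops, target_gb = 15_000, 2_000   # hypothetical workload
        hdds = devices_needed(target_iops, target_gb, dev_iops=180, dev_gb=600)      # fast HDD guess
        ssds = devices_needed(target_iops, target_gb, dev_iops=20_000, dev_gb=400)   # SSD guess

        print(f"Fast HDDs needed: {hdds}, SSDs needed: {ssds}")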

    If you do not have that information, use tools from your server, storage or third-party provider to gain insight into your activity to help size SSD. Also, if you have a database environment and are not familiar with the tools, talk with your DBAs and have them run some reports showing performance information that the two of you can discuss to zero in on hot spots or opportunities for SSD.
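
    As a hedged illustration only, the snippet below sketches one way to summarize a per-device activity export so the busiest devices, and thus candidate hot spots for SSD, float to the top; the file name and column names are assumptions, not any particular tool's format.

        # Hedged sketch: summarize a per-device activity export to spot busy devices.
        # The file name and column names are assumptions, not a specific tool's format.
        import csv
        from collections import defaultdict

        totals = defaultdict(float)
        with open("device_activity.csv") as f:   # assumed columns: device, reads_per_sec, writes_per_sec
            for row in csv.DictReader(f):
                totals[row["device"]] += float(row["reads_per_sec"]) + float(row["writes_per_sec"])

        # Busiest devices first; these are the candidate hot spots to discuss with the DBAs
        for device, activity in sorted(totals.items(), key=lambda kv: -kv[1])[:5]:
            print(device, round(activity, 1))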

    Keep in mind when looking at SSD what it is that you are trying to address by installing it. For example, is there a specific or known performance bottleneck resulting in poor response time or latency, or is there a general problem or perceived opportunity?

    Storage I/O Industry Trends and Perspectives

    Is there a lack of bandwidth for large data transfers, or is there a constraint on how many IO operations per second (IOPS), transactions or activities can be done in a given amount of time? In other words, the more you know where or what the bottleneck is, including whether you can trace it back to a single file, object, database, database table or other item, the closer you are to answering how much SSD you will need.

    As an example, if using third-party tools, tools provided by SSD vendors, or other sources, you decide that your IO bottlenecks are database transaction logs and system paging files, then having enough SSD space capacity to fit those is part of the solution. However, what happens when you remove that first set of bottlenecks: what new ones will appear, and will you have enough space capacity on your SSD to accommodate the next in line hot spot?
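
    Here is a minimal sketch of that "will the hot data fit" check; the sizes for logs, paging files, headroom and SSD capacity are hypothetical placeholders.

        # Minimal "will the hot data fit" sketch; all sizes are hypothetical placeholders.
        hot_items_gb = {
            "db_transaction_logs": 120,
            "system_paging_files": 64,
        }
        headroom = 0.5        # reserve space for the next bottleneck that surfaces
        ssd_usable_gb = 400   # assumed usable capacity of the SSD tier

        needed_gb = sum(hot_items_gb.values()) * (1 + headroom)
        print(f"Need about {needed_gb:.0f} GB of SSD, have {ssd_usable_gb} GB:",
              "fits" if needed_gb <= ssd_usable_gb else "does not fit")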

    Keep in mind that while you may want more SSD, the question is what you can get budget approval to buy now without more proof and a business case. Get some extra SSD space capacity to use for what you are confident can address other bottlenecks, or enable new capabilities.

    On the other hand, if you can only afford enough SSD to get started, make sure you also protect it. If you decide that two SSD devices (PCIe cache or target cards, drives or appliances) will take care of your performance and capacity needs, make sure to keep availability in mind. This means having extra SSD devices for RAID 1 mirroring, replication or another form of data protection and availability. Keep in mind that while traditional hard disk drive (HDD) storage is often gauged on cost per capacity, or dollars per GByte or TByte, with SSD you should measure value on cost per unit of performance. For example, how many IOPS, how much response time improvement, or how much bandwidth is obtained to meet your specific needs per dollar spent.
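
    To illustrate the cost-per-performance point, here is a small sketch that works out dollars per GByte and dollars per IOPS for a fast HDD versus an SSD; the prices and device specs are placeholders for illustration, not quotes.

        # Illustrative only: prices and device specs are placeholders, not quotes.
        devices = {
            "fast HDD": {"price": 300, "gb": 600, "iops": 180},
            "SSD":      {"price": 900, "gb": 400, "iops": 20_000},
        }

        for name, d in devices.items():
            print(f"{name}: ${d['price'] / d['gb']:.2f} per GByte, "
                  f"${d['price'] / d['iops']:.3f} per IOPS")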

    Related links
    What is the best kind of IO? The one you do not have to do
    Is SSD dead? No, however some vendors might be
    Speaking of speeding up business with SSD storage
    Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
    Why SSD based arrays and storage appliances can be a good idea (Part I)
    EMC VFCache respinning SSD and intelligent caching (Part I)
    SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Only you can prevent cloud data loss

    Storage I/O trends

    Some of you might remember the saying from Smokey the Bear, "Only you can prevent forest fires"; for those who do not know about it, click on the image below.

    The reason I bring this up is that while cloud providers are responsible (see the cloud blame game), it is also up to the user or consumer to take some ownership and responsibility.

    Similar to vendor lock-in, the only one who can allow vendor lock-in is the customer; granted, a vendor can help influence the customer.

    The same theme applies to public clouds and cloud storage providers, in that providers have a responsibility, along with government and industry regulations, to help protect consumers or users. However, there is also a shared responsibility for the user and consumer to make informed decisions.

    What is your perspective on who is responsible for cloud data protection?

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved