Seven databases in seven weeks, a book review of NoSQL databases

StorageIO industry trends cloud, virtualization and big data

Seven Databases in Seven Weeks (A Guide to Modern Databases and the NoSQL Movement) is a book written by Eric Redmond (@coderoshi) and Jim Wilson (@hexlib), part of The Pragmatic Programmers (@pragprog) series, that takes a look at several non-SQL-based database systems.

Cover image of seven databases in seven weeks book image

Coverage includes PostgreSQL, Riak, Apache HBase, MongoDB, Apache CouchDB, Neo4J and Redis, with plenty of code and architecture examples. Also covered are relational vs. key-value, columnar and document-based systems, among others.

The details: Seven Databases in Seven Weeks
Paperback: 352 pages
Publisher: Pragmatic Bookshelf (May 18, 2012)
Language: English
ISBN-10: 1934356921
ISBN-13: 978-1934356920
Product Dimensions: 7.5 x 0.8 x 9 inches

Buzzwords (or keywords) include availability, consistency, performance and related themes. Others include MongoDB, Cassandra, Redis, Neo4J, JSON, CouchDB, Hadoop, HBase, Amazon Dynamo, MapReduce, Riak (Basho) and Postgres, along with data models including relational, key-value, columnar, document and graph, plus big data, little data, cloud and object storage.

While this book is not a how-to tutorial or installation guide, it does give a deep dive into the different databases covered. The benefit is gaining an understanding of what the different databases are good for, their strengths and weaknesses, and where and when to use or choose them for various needs.
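To make the relational vs. NoSQL contrast a bit more concrete, here is a toy sketch of two of the data models the book covers, using plain Python dicts as stand-ins for real databases (the records and keys are invented for illustration):

```python
# Key-value (Redis-style): opaque values looked up by exact key only.
kv_store = {}
kv_store["user:42"] = '{"name": "Pat", "city": "Utrecht"}'

# Document (MongoDB/CouchDB-style): values are structured documents
# that can be queried by their fields.
doc_store = [
    {"_id": 42, "name": "Pat", "city": "Utrecht"},
    {"_id": 43, "name": "Sam", "city": "Frankfurt"},
]

# Key-value lookup: one operation, exact key required.
value = kv_store["user:42"]

# Document query: filter on any field inside the document.
in_utrecht = [d for d in doc_store if d["city"] == "Utrecht"]
print(len(in_utrecht))  # 1
```

Neither model is "better"; which to pick depends on whether you need fast opaque lookups or field-level queries, which is exactly the kind of trade-off the book walks through.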

Look inside seven databases in seven weeks book image
A look inside my copy of Seven Databases in Seven Weeks

Who should read this book includes application developers, programmers, cloud, big data and IT/ICT architects, planners and designers, along with database, server, virtualization and storage professionals. What I like about the book is that it is a great intro and overview, with sufficient depth to understand what these different solutions can and cannot do, and when, where and why to use these tools for different situations, all in a quick-read format with plenty of detail.

Would I recommend buying it? Yes, I bought a copy myself on Amazon.com; get your copy by clicking here.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Garbage data in, garbage information out, big data or big garbage?


Do you know the computer technology saying, garbage data in results in garbage information out?

In other words, even with the best algorithms and hardware, bad, junk or garbage data put in results in garbage information delivered. Of course, you might have data analysis and cleaning software to look for, find and remove bad or garbage data; however, that's a topic for a different post on another day.
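As a minimal sketch of what that kind of garbage-data screening can look like, here is a made-up example (records and validity rules are invented for illustration) that drops obviously bad records before they pollute any downstream analysis:

```python
# Toy data set with two "garbage" records mixed in.
records = [
    {"device": "disk01", "capacity_gb": 900},
    {"device": "", "capacity_gb": 900},        # garbage: missing name
    {"device": "disk03", "capacity_gb": -5},   # garbage: impossible value
]

def is_valid(rec):
    # Simple validity rules: device must be named, capacity positive.
    return bool(rec["device"]) and rec["capacity_gb"] > 0

clean = [r for r in records if is_valid(r)]
print(len(clean))  # 1
```

Real data-quality tooling goes far beyond rules like these (profiling, outlier detection, reconciliation), but the garbage-in problem starts with checks this simple being skipped.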

If garbage data in results in garbage information out, does garbage big data in result in big garbage out?

I’m sure my sales and marketing friends or their surrogates will jump at the opportunity to tell me why and how big data is the solution to the decades old garbage data in problem.

Likewise they will probably tell me big data is the solution to problems that have not even occurred or been discovered yet, yeah right.

However, garbage data does not discriminate or show preference towards big data or little data; in fact, it can infiltrate all types of data and systems.

Let's shift gears from big and little data to how all of that information is protected, backed up, replicated and copied for HA, BC, DR, compliance, regulatory or other reasons. I wonder how much garbage data is really out there, and how many garbage backups, snapshots, replicas or other copies of data exist? Sounds like a good reason to modernize data protection.

If we don't know where the garbage data is, how can we know if there is a garbage copy of the data for protection on some other tape, disk or cloud? That also means plenty of garbage data to compact (e.g. compress and dedupe) to cut its data footprint impact, particularly in tough economic times.
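For those unfamiliar with how dedupe cuts the data footprint, here is a toy illustration: identical chunks of data are stored once and referenced by their content hash. (Real dedupe engines work on fixed or variable-size blocks with far more sophistication; the chunks below are made up.)

```python
import hashlib

# Three chunks, two of which are identical (think repeated backups).
chunks = [b"backup-data", b"backup-data", b"unique-data"]

store = {}   # content hash -> chunk, stored only once
refs = []    # the "file" as a sequence of hash references
for c in chunks:
    h = hashlib.sha256(c).hexdigest()
    store.setdefault(h, c)   # only stores the first copy
    refs.append(h)

print(len(refs), len(store))  # 3 chunks referenced, 2 actually stored
```

The catch the post hints at: dedupe happily stores one pristine copy of your garbage data too; it shrinks the footprint without knowing whether the content was worth keeping.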

Does this mean then that the cloud is the new destination for garbage data in different shapes or forms, from online primary to back up and archive?

Does that then make the cloud the new virtual garbage dump for big and little data?

Hmm, I think I need to empty my desktop trash bin and email deleted items, among other digital housekeeping chores, now.

On the other hand, I just had a thought about orphaned data and orphaned storage; however, let's let those sleeping dogs lie for now.

Ok, nuff said.

Cheers gs


SSD, flash and DRAM, DejaVu or something new?


Recently I was in Europe for a couple of weeks, including stops at Storage Networking World (SNW) Europe in Frankfurt, StorageExpo Holland, Ceph Day in Amsterdam (object and cloud storage), and Nijkerk, where I delivered two separate two-day seminars and a single one-day seminar.

Image of Frankfurt train station. Image of the inside front of an ICE train going from Frankfurt to Utrecht

At the recent StorageExpo Holland event in Utrecht, I gave a couple of presentations: one on cloud, virtualization and storage networking trends, the other taking a deeper look at solid state devices (SSDs). As in the past, StorageExpo Holland was great, held in a fantastic venue with many large exhibits and strong attendance, which I heard was over 6,000 people over two days (excluding exhibitor vendors, VARs, analysts, press and bloggers), several times larger than what was seen in Frankfurt at the SNW event.

Image of Ilja Coolen (twitter @iCoolen), session host for the SSD presentation in Utrecht. Image of the StorageExpo Holland exhibit show floor in Utrecht

Both presentations were very well attended and included lively interactive discussion during and after the sessions. The theme of my second talk was SSD: the question is not if, rather what to use where, how and when, which brings us up to this post.

For those who have been around or using SSD for more than a decade outside of cell phones, cameras, SD cards or USB thumb drives, SSD probably means DRAM-based with some form of data persistency mechanism. More recently, mention SSD and that implies nand flash-based, either MLC, eMLC or SLC. Some might even think of NVRAM or other forms of SSD, including emerging MRAM, PCM or memristors, among others; however, let's stick to nand flash and DRAM for now.

image of ssd technology evolution

Often in technology what is old can be new, and what is new can be seen as old. If you have seen, experienced or done something before, you will have a sense of DejaVu and it might be evolutionary. On the other hand, if you have not seen, heard or experienced it, or it has found a new audience, then it can be revolutionary or maybe even an industry first ;).

Technology evolves, gets improved on, matures, and can often go in cycles of adoption, deployment, refinement, retirement and so forth. SSD in general has been an on-again, off-again type of cycle technology for the past several decades, except for the past six to seven years. Normally there is an up cycle tied to different events: servers not being fast enough or affordable, so use SSD to help address performance woes, or drives and storage systems not being fast enough, and so forth.

Btw, for those of you who think that the current SSD-focused technology (nand flash) is new, it is in fact 25 years old and still evolving, far from reaching its full potential in terms of customer deployment opportunities.


Nand flash memory has helped keep SSD practical for the past several years, riding a curve similar to the one keeping alive the hard disk drives (HDDs) it was supposed to replace: improved reliability, endurance or duty cycle, better annual failure rate (AFR), larger space capacity, lower cost, and enhanced interfaces, packaging, power and functionality.
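As a back-of-envelope illustration of AFR, a drive's rated mean time between failures (MTBF, in hours) can be converted to an approximate annual failure rate under a constant-failure-rate (exponential) assumption. The MTBF figure below is illustrative, not any particular vendor's rating:

```python
import math

def afr_from_mtbf(mtbf_hours, hours_per_year=8760):
    # Constant-failure-rate model: probability of failing within a year.
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# e.g. a drive rated at 1.2 million hours MTBF
print(round(afr_from_mtbf(1_200_000) * 100, 2), "% per year")
```

Field studies have shown observed AFRs often differ from vendor-rated MTBF-derived figures, which is one reason AFR itself has become the more commonly quoted metric.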

Where SSD can be used and options

DRAM, at least for the enterprise, has historically been the main option for SSD-based solutions, using some form of data persistency. Data persistency options include battery backup combined with internal HDDs to de-stage information from the DRAM before power was lost. TMS (recently bought by IBM) was one of the early SSD vendors from the DRAM era that made the transition to flash, including being one of the first, many years ago, to combine DRAM as a cache layer over nand flash as a persistency or de-stage layer. In other words, if you were not familiar with TMS back then and their capabilities, you might think or believe that some more recent introductions are new and revolutionary, and perhaps they are in their own right, or with enough caveats and qualifiers.

An emerging trend, which for some will be Dejavu, is that of using more DRAM in combination with nand flash SSD.

Oracle is one example of a vendor that IMHO rather quietly (intentionally or accidentally) has done this in its 7000 series storage systems as well as ExaData-based database storage systems. Rest assured they are not alone; in fact, many of the legacy large storage vendors have also piled up large amounts of DRAM-based cache in their storage systems, for example EMC with 2TB of DRAM cache in their VMAX 40K, or similar systems from Fujitsu, HP, HDS, IBM and NetApp (including the recent acquisition of DRAM-based CacheIQ), among others. This has also prompted the question of whether SSD has been successful in traditional storage arrays, systems or appliances, as some would have you believe not; click here to learn more and cast your vote.

SSD, IO, memory and storage hierarchy

So is the future in the past? Some would say no, some will say yes, however IMHO there are lessons to learn and leverage from the past while looking and moving forward.

Early SSDs were essentially RAM disks, that is, a portion of main random access memory (RAM), or what we now call DRAM, set aside as a non-persistent (unless battery-backed) cache or device. Using a device driver, applications could use the RAM disk as though it were a normal storage system. Different vendors sprang up with drivers for various platforms, then disappeared as the need for them was reduced by faster storage systems and interfaces, RAM disk drives supplied by vendors, not to mention SSD devices.
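A loose, application-level analogy for the RAM-disk idea is an in-memory buffer that a program treats like an ordinary file. (A real RAM disk sits below the file system via a device driver; this sketch only illustrates the concept of "storage" that lives entirely in memory and vanishes on power loss.)

```python
import io

# An in-memory "volume": file-like semantics, DRAM-speed, no persistence.
ramdisk = io.BytesIO()
ramdisk.write(b"hot data kept in memory")
ramdisk.seek(0)
data = ramdisk.read()
print(data)
```

The trade-off is the same one the early RAM disks had: blazing speed, but anything not de-staged to persistent media is gone when the process (or power) goes away.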

Oh, for you tech trivia types, there were also database machines from the late 80s, such as Britton Lee, that would offload your database processing functions to a specialized appliance. Sound like Oracle ExaData I, II or III to anybody?

Image of Oracle ExaData storage system

Ok, so we have seen this movie before. No worries: old movies or shows get remade, and unless you are nostalgic or cling to the past, sure, some of the remakes are duds; however, many can be quite good.

Same goes with the remake of some of what we are seeing now. Sure, there is a generation that does not know nor care about the past; it's full speed ahead, leveraging whatever will get them there.

Thus we are seeing in-memory databases again; some of you may remember the original series (pick your generation, platform, tool and technology), with each variation getting better. With 64-bit processors, 128-bit and beyond file systems and addressing, not to mention the ability for more DRAM to be accessed directly or via memory address extension, combined with memory data footprint reduction or compression, there is more space to put things (e.g. no such thing as a data or information recession).
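For a hands-on, small-scale feel of the in-memory database idea, SQLite's ":memory:" mode is a convenient illustration: the whole database lives in DRAM and vanishes when the connection closes, unless explicitly saved somewhere persistent. (The table and values here are made up for the example.)

```python
import sqlite3

# The entire database exists only in memory for this connection.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE metrics (name TEXT, value REAL)")
con.execute("INSERT INTO metrics VALUES ('iops', 50000.0)")
row = con.execute("SELECT value FROM metrics WHERE name='iops'").fetchone()
print(row[0])  # 50000.0
con.close()    # database is gone; nothing was de-staged to disk
```

Enterprise in-memory databases add the hard parts this toy skips: persistence via logging or snapshots, replication, and what to do when the data outgrows memory.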

Let's also keep in mind that the best IO is the IO that you do not have to do, and that SSD, which is an extension of the memory map, plays by the same rules of real estate: location matters.

Thus, here we go again for some of you (DejaVu), while for others, get ready for a new and exciting ride (new and revolutionary). We are back to the future with in-memory databases, which for a time will take some pressure off underlying IO systems until they once again outgrow server memory addressing limits (or IT budgets).

However, do not fall into a false sense of security; have no fear, as there is no such thing as a data or information recession. Sure as the sun rises in the east and sets in the west, sooner or later those IOs that were or are being kept in memory will need to be de-staged to persistent storage, either nand flash SSD, HDD or, somewhere down the road, PCM, MRAM and more.


There is another trend: with more IOs being cached, reads are moving to where they should resolve, which is closer to the application, higher up in the memory and IO pyramid or hierarchy (shown above).

Thus, we could see a shift over time to more writes and ugly IOs being sent down to the storage systems. Keep in mind that any cache historically provides temporal relief; the question is how long that relief lasts, or until the next new and revolutionary, or DejaVu, technology shows up.
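The "best IO is the IO you do not have to do" point can be sketched in a few lines: reads that resolve in a cache near the application never reach the storage system at all. (This uses Python's standard `functools.lru_cache` purely as an illustration; the backend counter is a stand-in for trips to actual storage.)

```python
from functools import lru_cache

backend_reads = {"count": 0}

@lru_cache(maxsize=128)
def read_block(block_id):
    backend_reads["count"] += 1      # simulate a trip to backend storage
    return f"data-{block_id}"

read_block(7)
read_block(7)                        # second read is a cache hit
print(backend_reads["count"])        # 1 backend IO served 2 reads
```

Note what the cache does not absorb: writes still have to go down eventually, which is exactly the shift toward "more writes and ugly IOs" hitting the storage systems described above.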

Ok, go have fun now, nuff said.

Cheers gs


Is SSD only for performance?


Normally solid state devices (SSDs), including non-persistent DRAM and persistent nand flash, are thought of in the context of performance, including bandwidth or throughput, response time or latency, and IOPS or transactions. However, there is another role where SSDs are commonly used where the primary focus is not performance. Besides consumer devices such as iPhones, iPads, iPods, Androids, MP3 players, cell phones and digital cameras, the other use is for harsh environments.

Harsh environments include those (both commercial and government) where use of SSDs is a solution to vibration or other rough handling. These include commercial and military aircraft, telemetry and mobile command, control and communications, and energy exploration, among others.

What's also probably not commonly thought about is that the vendors or solution providers for the above specialized environments include mainstream vendors such as IBM (via their TMS acquisition) and EMC, among others. Yes, EMC is involved with deploying SSD in different environments, including all-nand-flash VNX systems.

In a normal IT environment, vibration should not be an issue for storage devices assuming quality solutions with good enclosures are used. However some environments that are pushing the limits on density may become more susceptible to vibration. Not all of those use cases will be SSD opportunities, however some that can leverage IO density along with tolerance to vibration will be a good fit.
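One way to see why dense environments favor SSD is "IO density": IOPS per unit of capacity. A quick comparison of the sort implied above, with figures that are illustrative assumptions rather than any vendor's specifications:

```python
# Notional device profiles (illustrative numbers, not vendor specs).
devices = {
    "ssd": {"iops": 40000, "capacity_gb": 400},
    "hdd": {"iops": 180,   "capacity_gb": 2000},
}

# IO density = IOPS per GB of capacity.
density = {name: d["iops"] / d["capacity_gb"] for name, d in devices.items()}
print(density["ssd"], density["hdd"])
```

By this (made-up but representative) measure the SSD delivers orders of magnitude more IOPS per GB, which, combined with no moving parts, is why it fits density-and-vibration-constrained deployments.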

Does that mean HDDs cannot or should not be used in high-density environments where vibration can be an issue?

That depends.

If the right drive enclosures and types of drives are used, following manufacturers' recommendations, then all should be good. Keep in mind that there are many options to leverage SSD for various scenarios.

Which tool or technology to use when, where or how much will depend on the specific situation, or perhaps your preferences for a given product or approach.

Ok, nuff said.

Cheers gs


Data Center Infrastructure Management (DCIM) and IRM


There are many business drivers and technology reasons for adopting data center infrastructure management (DCIM) and infrastructure resource management (IRM) techniques, tools and best practices. Today's agile data centers need updated management systems, tools and best practices that allow organizations to plan, run at low cost, and analyze for workflow improvement. After all, there is no such thing as an information recession, driving the need to move, process and store more data. With budget and other constraints, organizations need to be able to stretch available resources further while reducing costs, including for physical space and energy consumption.

The business value proposition of DCIM and IRM includes:

DCIM, Data Center, Cloud and storage management figure

Data Center Infrastructure Management (DCIM), also known as IRM, has, as the names describe, a focus on managing resources in the data center or information factory. IT resources include physical floor and cabinet space, power and cooling, networks and cabling, physical (and virtual) servers and storage, and other hardware and software management tools. For some organizations, DCIM will have a more facilities-oriented view, focusing on physical floor space, power and cooling. Other organizations will have a converged view crossing hardware, software and facilities, along with how those are used to effectively deliver information services in a cost-effective way.

Common to all DCIM and IRM practices are metrics and measurements, along with other related information about available resources, for gaining situational awareness. Situational awareness enables visibility into what resources exist, how they are configured and being used, by what applications, and their performance, availability, capacity and economic effectiveness (PACE) in delivering a given level of service. In other words, DCIM enabled with metrics and measurements that matter allows you to avoid flying blind and to make prompt and effective decisions.

DCIM, Data Center and Cloud Metrics Figure

DCIM comprises the following:

  • Facilities, power (primary and standby, distribution), cooling, floor space
  • Resource planning, management, asset and resource tracking
  • Hardware (servers, storage, networking)
  • Software (virtualization, operating systems, applications, tools)
  • People, processes, policies and best practices for management operations
  • Metrics and measurements for analytics and insight (situational awareness)
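On the facilities side of the list above, one widely used DCIM metric is power usage effectiveness (PUE): total facility power divided by the power actually delivered to IT equipment. The kW values below are illustrative assumptions, not measurements from any real site:

```python
def pue(total_facility_kw, it_equipment_kw):
    # PUE = total facility power / IT equipment power.
    # 1.0 is the theoretical ideal (all power reaching IT gear);
    # the gap above 1.0 is cooling, power distribution losses, lighting, etc.
    return total_facility_kw / it_equipment_kw

print(round(pue(1500.0, 1000.0), 2))  # 1.5
```

Metrics like this only become actionable when trended over time and correlated with the workload being delivered, which is the situational-awareness point made above.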

The evolving DCIM model is around elasticity, multi-tenancy, scalability and flexibility, and is metered and service-oriented. Service-oriented means a combination of being able to rapidly deliver new services while keeping customer experience and satisfaction in mind. Also part of being focused on the customer is enabling organizations to be competitive with outside service offerings while being more productive and economically efficient.

DCIM, Data Center and Cloud E2E management figure

While specific technology domain areas or groups may be focused on their respective areas, interdependencies across IT resource areas are a matter of fact for efficient virtual data centers. For example, provisioning a virtual server relies on configuration and security of the virtual environment, physical servers, storage and networks along with associated software and facility related resources.

You can read more about DCIM, ITSM and IRM in this white paper that I did, as well as in my books Cloud and Virtual Data Storage Networking (CRC Press) and The Green and Virtual Data Center (CRC Press).

Ok, nuff said, for now.

Cheers gs


IBM vs. Oracle, NAD intervenes, again


With HP announcing that they were sold a bogus deal with Autonomy (read here, here and here among others) and the multi-billion dollar write-off (loss), or speculation about who will be named the new CEO of Intel in 2013, don't worry if you missed the latest in the ongoing IBM vs. Oracle campaign. The other day the NAD (National Advertising Division), part of the Better Business Bureau (BBB), issued yet another statement about IBM and Oracle (read here and posted below).

NAD BBB logo

In case you had not heard, earlier this year, Oracle launched an advertising promotion touting how much faster their solutions are vs. IBM. Perhaps you even saw the advertising billboards along highways or in airports making the Oracle claims.

Big Blue (e.g. IBM), being the giant that they are, was not going to take the Oracle challenge sitting down, and stepped up and complained to the Better Business Bureau (BBB). As a result, the NAD issued a decision for Oracle to stop the ads (read more here). Oracle at $37.1B (May 2012 annual earnings) is about a third the size of IBM at $106.9B (2011 earnings), thus neither is exactly a small business.

Let's get back to the topic at hand: the NAD has issued yet another directive. In the latest spat, after the first ads, Oracle launched the $10M challenge (you can read about that here).

Oracle 10 million dollar challenge ad image

Once again the BBB and the NAD weighed in for IBM and issued the following statement (mentioned above):

For Immediate Release
Contact: Linda Bean
212.705.0129

NAD Determines Oracle Acted Properly in Discontinuing Performance Claim Couched in ‘Contest’ Language

New York, NY – Nov. 20, 2012 – The National Advertising Division has determined that Oracle Corporation took necessary action in discontinuing advertising that stated its Exadata server is “5x Faster Than IBM … Or you win $10,000,000.”

The claim, which appeared in print advertising in the Wall Street Journal and other major newspapers, was challenged before NAD by International Business Machines Corporation.

NAD is an investigative unit of the advertising industry system of self-regulation and is administered by the Council of Better Business Bureaus.

As an initial matter, NAD considered whether or not Oracle’s advertisement conveyed a comparative performance claim – or whether the advertisement simply described a contest.

In an NAD proceeding, the advertiser is obligated to support all reasonable interpretations of its advertising claims, not just the message it intended to convey. In the absence of reliable consumer perception evidence, NAD uses its judgment to determine what implied messages, if any, are conveyed by an advertisement.

Here, NAD found that, even accounting for a sophisticated target audience, a consumer would be reasonable to take away the message that all Oracle Exadata systems run five times as fast as all IBM’s Power computer products. NAD noted in its decision that the fact that the claim was made in the context of a contest announcement did not excuse the advertiser from its obligation to provide substantiation.

The advertiser did not provide any speed performance tests, examples of comparative system speed superiority or any other data to substantiate the message that its Exadata computer systems run data warehouses five times as fast as IBM Power computer systems.

Accordingly, NAD determined that the advertiser’s decision to permanently discontinue this advertisement was necessary and appropriate. Further, to the extent that Oracle reserves the right to publish similar advertisements in the future, NAD cautioned that such performance claims require evidentiary support whether or not the claims are couched in a contest announcement.

Oracle, in its advertiser’s statement, said it disagreed with NAD’s findings, but would take “NAD’s concerns into account should it disseminate similar advertising in the future.”

###

NAD’s inquiry was conducted under NAD/CARU/NARB Procedures for the Voluntary Self-Regulation of National Advertising. Details of the initial inquiry, NAD’s decision, and the advertiser’s response will be included in the next NAD/CARU Case Report.

About Advertising Industry Self-Regulation: The Advertising Self-Regulatory Council establishes the policies and procedures for advertising industry self-regulation, including the National Advertising Division (NAD), Children’s Advertising Review Unit (CARU), National Advertising Review Board (NARB), Electronic Retailing Self-Regulation Program (ERSP) and Online Interest-Based Advertising Accountability Program (Accountability Program.) The self-regulatory system is administered by the Council of Better Business Bureaus.

Self-regulation is good for consumers. The self-regulatory system monitors the marketplace, holds advertisers responsible for their claims and practices and tracks emerging issues and trends. Self-regulation is good for advertisers. Rigorous review serves to encourage consumer trust; the self-regulatory system offers an expert, cost-efficient, meaningful alternative to litigation and provides a framework for the development of a self-regulatory response to emerging issues.

To learn more about supporting advertising industry self-regulation, please visit us at: www.asrcreviews.org.

Linda Bean Director, Communications,
Advertising Self-Regulatory Council

Tel: 212.705.0129
Cell: 908.812.8175
lbean@asrc.bbb.org

112 Madison Ave.
3rd Fl.
New York, NY
10016

Not surprisingly, IBM sent the following email to highlight their latest news:

Greg,

For the third time in eight months Oracle has agreed to kill a misleading advertisement targeting IBM after scrutiny from the Better Business Bureau’s National Advertising Division.

Oracle's '$10 Million Challenge' ad claimed that its Exadata server was 'Five Times Faster than IBM Power or You Win $10,000,000.' The advertising council just issued a press release announcing that the claim was not supported by the evidence in the record, and that Oracle has agreed to stop making the claim. '[Oracle] did not provide speed performance tests, examples of comparative systems speed superiority or any other data to substantiate its message,' the BBB says in the release. The ads ran in The Wall Street Journal, The Economist, Chief Executive Magazine, trade publications and online.

The National Advertising Division reached similar judgments against Oracle advertising on two previous occasions this year. Lofty and unsubstantiated claims about Oracle systems being ‘Twenty Times Faster than IBM’ and ‘Twice as Fast Running Java’ were both deemed to be unsubstantiated and misleading. Oracle quietly shelved both campaigns.

If you follow Oracle’s history of claims, you won’t be surprised that the company issues misleading ads until they’re called out in public and forced to kill the campaign. As far back as 2001, Oracle’s favorite tactic has been to launch unsubstantiated attacks on competitors in ads while promising prize money to anyone who can disprove the bluff. Not surprisingly, no prize money is ever paid as the campaigns wither under scrutiny. They are designed to generate publicity for Oracle, nothing more. You may be familiar with their presentation, ‘Ridding the Market of Competition,’ which they issued to the Society of Competitive Intelligence Professionals laying out their strategy.

The repeated rulings by the BBB even caused analyst Rob Enderle to comment that, ‘there have been significant forced retractions and it is also apparent that increasingly the only people who could cite these false Oracle performance advantages with a straight face were Oracle’s own executives, who either were too dumb to know they were false or too dishonest to care.’

Let me know if you’re interested in following up on this news. You won’t hear anything about it from Oracle.

Best,

Chris

Christopher Rubsamen
Worldwide Communications for PureSystems and Cloud Computing
IBM Systems & Technology Group
aim: crubsamen
twitter: @crubsamen

Wow, I never knew however I should not be surprised that there is a Society of Competitive Intelligence Professionals.

Now, Oracle is what they are: aggressive, with a history of doing creative or innovative things (e.g. stepping out of bounds) in sales and marketing campaigns, benchmarking and other activities. On the other hand, has IBM been victimized at the hands of Oracle, thus having to resort to using the BBB and NAD as part of its new sales and marketing toolkit to counter Oracle?

Does anybody think that the above will cause Oracle to retreat, repent, or tone down how they compete on the field of sales and marketing of servers, storage, databases and related IT, ICT, big and little data, and clouds?

Anyone else have a visual of a group of IBMers sitting around a table at an exclusive country club, enjoying a fine cigar along with a glass of cognac, toasting each other on their recent success in having the BBB and NAD issue another ruling against Oracle? Meanwhile, perhaps at some left coast yacht club, the Oracle crew are high-fiving, congratulating each other on their commission checks while spraying champagne all over the place like they just won the Americas Cup race?

How about it, Oracle? IBM says I'm not going to hear anything from you; is that true?

Ok, nuff said (for now).

Cheers gs


Podcast: vBrownbags, vForums and VMware vTraining with Alastair Cooke


This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here), as well as by listening via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml).


In this episode, we go virtual, both with the topic (virtualization) and by communicating around the world via Skype. My guest is Alastair Cooke (@DemitasseNZ), who joins me from New Zealand to talk about VMware education, training and social networking. Some of the topics that we cover include vForums, vBrownbags, VMware VCDX certification, VDI, AutoLab, Professional vBrownbag tech talks, coffee and more. If you are into server virtualization or virtual desktop infrastructure (VDI), or need to learn more, Alastair talks about some great resources. Check out Alastair's site www.demitasse.co.nz for more information about the AutoLab, VMware training and education, along with the vBrownbag podcasts that are also available on iTunes, as well as the APAC Virtualisation podcasts.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Alastair and myself.

StorageIO podcast


Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode vBrownbags, vForums and VMware vTraining with Alastair Cooke.

Ok, nuff said.

Cheers gs


SSD past, present and future with Jim Handy

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here). You can also listen via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml)

StorageIO industry trends cloud, virtualization and big data

In this episode, I talk with SSD nand flash and DRAM chip analyst Jim Handy of Objective Analysis at the LSI AIS (Accelerating Innovation Summit) 2012 in San Jose. Our conversation includes SSD past, present and future; market and industry trends; who is doing what and things to keep an eye and ear open for; along with server, storage and memory convergence.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Jim and myself.

StorageIO podcast

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode SSD Past, Present and Future with Jim Handy.

Ok, nuff said.

Cheers gs


Have SSDs been unsuccessful with storage arrays (with poll)?

Storage I/O Industry Trends and Perspectives

I hear people talking about how Solid State Devices (SSDs) have not been successful with or for vendors of storage arrays, particularly legacy storage systems. Some people have also asserted that large storage arrays are dead at the hands of new purpose-built SSD appliances or storage systems (read more here).

As a reference, legacy storage systems include those from EMC (VMAX and VNX), IBM (DS8000, DCS3700, XIV, and V7000), and NetApp FAS along with those from Dell, Fujitsu, HDS, HP, NEC and Oracle among others.

Granted, EMC has launched new SSD based solutions in addition to buying startup XtremIO (aka Project X), and IBM bought SSD industry veteran TMS. IMHO, neither of those actions by either vendor signals an early retirement for their legacy storage solutions; instead they open up new markets, giving customers more options for addressing data center and IO performance challenges. Keep in mind that the best IO is the one that you do not have to do, with the second best being the one with the least impact to applications in a cost-effective way.

SSD, IO, memory and storage hierarchy

Sometimes I even hear people citing or using some other person or source to attribute or make their assertions sound authoritative. You know the game: according to XYZ, or ABC said blah blah blah. Of course if you say or repeat something often enough, or hear it again and again, it can become self-convincing (e.g. industry adoption vs. customer deployments). Likewise, the more degrees of separation that exist between you and the information you get, the more it can change from what it originally was.

So what about it: has SSD not been successful for legacy storage system vendors, and is the only place that SSD has had success with startups or non-array based solutions?

While there have been some storage systems (arrays and appliances) that may not perform up to their claimed capabilities due to various internal architecture or implementation bottlenecks, for the most part the large vendors including EMC, HP, HDS, IBM, NetApp and Oracle have done very well shipping SSD drives in their solutions. Likewise some of the clean sheet new design based startup systems, as well as some of the startups with hybrid solutions combining HDDs and SSDs, have done well, while others are still emerging.

Where SSD can be used and options

This could also be an example of myth becoming reality based on industry adoption vs. customer deployment. From an industry adoption conversation standpoint, the myth is that it is the startups having success vs. the legacy vendors, and thus some believe it.

On the other hand, the myth is that vendors such as EMC or NetApp have not had success with their arrays and SSD, yet their customer deployments prove otherwise. There is also a myth that only PCIe based SSDs can be of value and that drive based SSDs are not worth using; I have a good idea where that myth comes from.

IMHO it depends; however it is safe to say, from what I have seen directly, that some vendors of storage arrays, including so-called legacy systems, have had very good success with SSD. Likewise I have seen some startups do ok with their new clean sheet designs, including EMC's (Project X). Oh, and at least for now I am not a believer that with the all SSD based Project X over at EMC, the venerable VMAX (formerly known as DMX, and its predecessor Symmetrix) has finally hit the end of the line. Rather they will be positioned and play to different markets for some time yet.

Over at IBM, I don't think the DS8000, XIV, V7000 and SVC folks are winding things down now that they bought SSD vendor TMS, which has SSD appliances and PCIe cards. Rest assured there have been successes by PCIe flash card vendors, both as targets (FusionIO) and as cache or hybrid cache and target systems such as those from Intel, LSI, Micron, and TMS (now IBM) among others. Oh, and if you have not noticed, check out what QLogic, Emulex and some of the other traditional HBA vendors have done with and around SSD caching.

So where does the FUD that storage systems have not had success with SSD come from?

I suspect it comes from those who would rather not see or hear about others' success taking attention away from them or their markets. In other words, using Fear, Uncertainty and Doubt (FUD) or some community peer pressure, there is a belief by some that if you hear enough times that something is dead or not of benefit, you will look at the alternatives.

Care to guess what the preferred alternative is for some? If you guessed a PCIe card or SSD based appliance from your favorite startup that would be a fair assumption.

On the other hand, my educated guess (ok, it's much more informed than a guess ;) ) is that if you ask a vendor such as EMC or NetApp, they would disagree, while at the same time articulating the benefits of different approaches and tools. Likewise, my educated guess is that if you ask some others, they will say mixed things, and of course if you talk with the pure plays, take a wild yet educated guess what they will say.

Here is my point.

SSD, DRAM, PCM and storage adoption timeline

The SSD market, including DRAM, nand flash (SLC, MLC or any other xLC), emerging PCM and future MRAM among other technologies and packaging options, is still in its relative infancy. Yes, I know there has been significant industry adoption and many early customer deployments; however, talking with IT organizations of all sizes as well as with vendors and VARs, customer deployment of SSD is far from reaching its full potential, which means a bright future.

Simply putting an SSD, card or drive into a solution does not guarantee results.

Likewise having a new architecture does not guarantee things will be faster.

Fast storage systems need fast devices (HDDs, HHDDs and SSDs) along with fast interfaces to connect with fast servers. Put a fast HDD, HHDD or SSD into a storage system that has bottlenecks (hardware, software, architectural design) and you may not see the full potential of the technology. Likewise, put fast ports or interfaces on a storage system that has fast devices but also a bottleneck in its controller or system architecture, and you will not realize the full potential of that solution.
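The bottleneck principle above can be sketched in a few lines of Python: end-to-end throughput is capped by the slowest stage in the stack, so a fast SSD behind a slow controller delivers controller-speed results. The stage names and throughput numbers below are hypothetical, purely for illustration.

```python
# Hedged sketch: model a storage stack as a pipeline of stages, where
# end-to-end throughput is limited by the slowest stage. All figures
# are made-up examples, not measurements of any real product.

def effective_throughput(stages):
    """Return the bottleneck (minimum) throughput across all stages."""
    return min(stages.values())

system = {
    "host_interface_MBps": 800,  # fast front-end ports (assumed)
    "controller_MBps": 350,      # internal controller limit (assumed)
    "ssd_device_MBps": 500,      # raw SSD capability (assumed)
}

# Despite the 500 MBps SSD and 800 MBps ports, the controller caps
# the whole system at 350 MBps.
print(effective_throughput(system))  # 350
```

Swapping in a faster SSD changes nothing in this model until the controller limit is raised, which is the same point about legacy arrays and clean sheet designs alike.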

This is not unique to legacy or traditional storage systems, arrays or appliances as it is also the case with new clean sheet designs.

There are many new solutions that are or should be as fast as their touted marketing stories present; however, just because something looks impressive in a YouTube video, slide deck or WebEx does not mean it will be fast in your environment. Some of these new design SSD based solutions will displace some legacy storage systems or arrays, while many others will find new opportunities. Similar to how previous generation SSD storage appliances found roles complementing traditional storage systems, so too will many of these new generation products.

What this all means is that you need to navigate your way through the various marketing and architecture debates, benchmark battles, claims and counter claims to understand what fits your needs and requirements.


What say you?

Ok, nuff said

Cheers gs


Mr. Backup (Curtis Preston) goes back to Ceph School

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here). You can also listen via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml)

StorageIO industry trends cloud, virtualization and big data

In this episode, I am at the Ceph Day event in Amsterdam, Holland, at the Tobacco Theatre, hosted by 42on.com and inktank.com.

Ceph Day Amsterdam 2012

My guest for this episode is Curtis (Mr. Backup) Preston (@wcpreston) of Backup School and Backup Central fame. We discuss what Ceph and object storage are, cloud storage, file systems, backup and data protection, along with the dinner we had at an Indonesian restaurant.

Dinner Restaurant Blauw Utrecht Netherlands
Mr Backup getting ready to compress and dedupe dinner

The dinner we are referring to was at Restaurant Blauw in Utrecht, Holland (click here), where Curtis and I were joined by Hans De Leenheer (@hansdeleenher) of Veeam (thanks again for the dinner; that was a disclosure btw ;) ).

Note that this is a special episode: while I'm recording the podcast, Curtis is recording a video of our discussion for his truebit.tv site that you can view here.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Curtis and myself.

StorageIO podcast

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Also check out the companion to this podcast, where I meet up with Ceph creator Sage Weil while at Ceph Day.

Enjoy this episode Mr. Backup (Curtis Preston) goes back to Ceph School.

 

Ok, nuff said.

Cheers gs


Ben Woo on Big Data Buzzword Bingo and Business Benefits

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here). You can also listen via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml)

StorageIO industry trends cloud, virtualization and big data

In this episode, I'm joined in Frankfurt, Germany by Ben Woo (@benwoony) of Neuralytix.com. Our conversation includes cloud and big data, and how buzzword bingo, technology focused discussions can result in missed business benefits for both vendors and customers. We also reminisce about MTI, where we worked together, along with protecting home storage.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Ben and myself.

StorageIO podcast

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode with Ben Woo talking big data and business benefits vs. buzzword bingo.

Ok, nuff said.

Cheers gs


Ceph Day in Amsterdam and Sage Weil on Object Storage

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here). You can also listen via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml)

StorageIO industry trends cloud, virtualization and big data

In this episode, I am at the Ceph Day event at the Tobacco Theatre in Amsterdam, Holland. My guest for this episode is Ceph (as in cephalopod) creator Sage Weil, who is also the founder of inktank.com, which provides services and support for the open source Ceph project.

For those not familiar with Ceph, it is an open source, scale-out distributed object storage software platform that can be used for deploying cloud and managed services; as general purpose storage for research, commercial, scientific, high performance computing (HPC) or high productivity computing (commercial) workloads; and as a backup, data protection or archiving destination.
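To illustrate the deterministic placement idea behind distributed object stores such as Ceph, here is a simplified sketch using rendezvous (highest random weight) hashing. This is not Ceph's actual CRUSH algorithm, and the node names are hypothetical; it only shows how any client can compute where an object lives without a central lookup table.

```python
import hashlib

# Hypothetical storage nodes; real Ceph uses OSDs organized in a
# CRUSH map with failure domains, which this sketch does not model.
OSDS = ["osd.0", "osd.1", "osd.2", "osd.3"]

def place(obj_name, replicas=2):
    """Rank nodes by a per-(node, object) hash and keep the top ones.

    Every client running this function gets the same answer for the
    same object name, so no central metadata lookup is needed.
    """
    def weight(osd):
        digest = hashlib.sha256(f"{osd}:{obj_name}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(OSDS, key=weight, reverse=True)[:replicas]

# Deterministic: repeated calls yield the same replica set.
print(place("backup-2012-11.tar"))
```

The design point is that placement is computed, not looked up, which is one reason such systems can scale out without a metadata bottleneck.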

During our conversation Sage presents an overview of what Ceph is (e.g. Ceph for non Dummies), where and how it can be used, some history of the project, and how it fits in with or provides an alternative to other solutions. Sage also talks about the business or commercial considerations for open source based projects, the importance of community and having good business mentors and partners, as well as staying busy with his young family.

If you are a Ceph fan, gain more insight into Sage along with Ceph Day sponsors Inktank and 42on. On the other hand, if you are new to object storage, open source storage software or cloud storage, listen in to gain perspectives on where technology such as Ceph fits for public, private, hybrid or traditional environments.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Sage and myself.

StorageIO podcast

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode Ceph Day in Amsterdam with Sage Weil.

Ok, nuff said.

Cheers gs


Little data, big data and very big data (VBD) or big BS?

StorageIO industry trends cloud, virtualization and big data

This is an industry trends and perspective piece about big data and little data, industry adoption and customer deployment.

If you are in any way associated with information technology (IT), business, scientific, media and entertainment computing or related areas, you may have heard big data mentioned. Big data has been a popular buzzword bingo topic and term for a couple of years now. Big data is being used to describe new and emerging along with existing types of applications and information processing tools and techniques.

I routinely hear from different people or groups trying to define what is or is not big data, and all too often those definitions are based on a particular product, technology, service or application focus. Thus it should be no surprise that those trying to police what is or is not big data will often do so based on what their interests, sphere of influence, knowledge or experience and jobs depend on.

Traveling and big data images

Not long ago while out traveling I ran into a person who told me that big data is new data that did not exist just a few years ago. It turns out this person was involved in geology, so I was surprised that somebody in that field was not aware of or working with geophysical, mapping, seismic and other legacy or traditional big data. He was basing his statements on what he knew, heard or was told about, or on the sphere of influence around a particular technology, tool or approach.

Fwiw, if you have not figured it out already: like cloud, virtualization and other technology enabling tools and techniques, I tend to take a pragmatic approach vs. becoming latched onto a particular bandwagon (for or against) per se.

Not surprisingly there is confusion and debate about what is or is not big data, including whether it only applies to new vs. existing and old data. As with any new technology, technique or buzzword bingo topic theme, various parties will try to place what is or is not under the definition to align with their needs, goals and preferences. This is the case with big data, where you can routinely find proponents of Hadoop and MapReduce positioning big data as aligning with the capabilities and usage scenarios of those related technologies for business and other forms of analytics.

SAS software for big data

Not surprisingly, the granddaddy of all business analytics, data science and statistical analysis number crunching is the Statistical Analysis System (SAS) from the SAS Institute. If these types of technology solutions and their peers define what big data is, then SAS (not to be confused with Serial Attached SCSI, which can be found on the back-end of big data storage solutions) can be considered first generation big data analytics, or Big Data 1.0 (BD1 ;) ). That means Hadoop MapReduce is Big Data 2.0 (BD2 ;) ;) ), if you like, or dislike for that matter.

Funny thing about some fans, proponents or surrogates of BD2: they may have heard of BD1 tools like SAS, yet have a limited understanding of what they are or how they are or can be used. When I worked in IT as a performance and capacity planning analyst focused on servers, storage, network hardware, software and applications, I used SAS to crunch various data streams of event, activity and other data from diverse sources. This involved correlating data and running various analytic algorithms on it to determine response times, availability, usage and other things in support of modeling, forecasting, tuning and troubleshooting. Hmm, does that sound like first generation big data analytics, or Data Center Infrastructure Management (DCIM) and IT Service Management (ITSM), to anybody?
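To make the BD1 vs. BD2 comparison concrete, here is a toy map/shuffle/reduce pass written in plain Python rather than actual Hadoop, computing average response time per server the way a capacity planning analyst might. The event records and server names are invented for illustration.

```python
from collections import defaultdict

# Toy (server, response_ms) events, standing in for the activity and
# event streams a performance analyst would crunch with SAS or Hadoop.
events = [("web1", 12), ("web1", 18), ("db1", 40), ("db1", 44), ("web1", 15)]

# Map: emit (key, value) pairs from each input record.
mapped = [(server, ms) for server, ms in events]

# Shuffle: group all values by key.
grouped = defaultdict(list)
for server, ms in mapped:
    grouped[server].append(ms)

# Reduce: aggregate each group, here into an average response time.
averages = {server: sum(ms) / len(ms) for server, ms in grouped.items()}
print(averages)  # {'web1': 15.0, 'db1': 42.0}
```

The same map, shuffle and reduce phases are what a real Hadoop job distributes across many nodes; the analytic idea itself is the same kind of correlation and aggregation BD1 tools have done for decades.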

Now to be fair, comparing SAS, SPSS or any number of other BD1 generation tools to Hadoop and Map Reduce or BD2 second generation tools is like comparing apples to oranges, or apples to pears.

Let's move on, as there is much more to big data than simply a focus on SAS or Hadoop.


Another type of big data is the information generated, processed, stored and used by applications that results in large files, data sets or objects. Large files, objects or data sets include low resolution and high-definition photos, videos, audio, security and surveillance, geophysical mapping and seismic exploration data among others. Then there are data warehouses, where transactional data from databases gets moved for analysis in systems such as those from Oracle, Teradata, Vertica or FX among others. Some of those tools even play (or work) in both the traditional (e.g. BD1) and new or emerging (BD2) worlds.

This is where some interesting discussions, debates or disagreements can occur between those who latch onto or want to keep big data associated with being something new and usually focused around their preferred tool or technology. What results from these types of debates or disagreements is a missed opportunity for organizations to realize that they might already be doing or using a form of big data and thus have a familiarity and comfort zone with it.

By having a familiarity or comfort zone vs. seeing big data as something new, different, hype or full of FUD (or BS), an organization can become comfortable with the term big data. Often after taking a step back and looking at big data beyond the hype or FUD, the reaction is along the lines of: oh yeah, now we get it; sure, we are already doing something like that, so let's take a look at some of the new tools and techniques to see how we can extend what we are doing.

Likewise, many organizations are doing big bandwidth already and may not realize it, thinking big bandwidth is only what media and entertainment, government, technical or scientific computing, high performance computing or high productivity computing (HPC) does. I'm assuming that some of the big data and big bandwidth pundits will disagree; however, if in your environment you are doing many large backups, archives, content distributions, or copies of large amounts of data for different purposes, then you consume big bandwidth and need big bandwidth solutions.

Yes I know, that’s apples to oranges and perhaps stretching the limits of what is or can be called big bandwidth based on somebody’s definition, taxonomy or preference. Hopefully you get the point that there is diversity across various environments as well as types of data and applications, technologies, tools and techniques.


What about little data then?

I often say that if big data is getting all the marketing dollars to generate industry adoption, then little data is generating all the revenue (and profit or margin) dollars through customer deployment. While tools and technologies related to Hadoop (or Haydoop if you are from HDS) are getting the industry adoption attention (e.g. marketing dollars being spent), revenues from customer deployment are growing.

Where big data revenues are strongest for most vendors today is centered around solutions for hosting, storing, managing and protecting big files and big objects. These include scale out NAS solutions for large unstructured data like those from Amplidata, Cray, Dell, Data Direct Networks (DDN), EMC (e.g. Isilon), HP X9000 (IBRIX), IBM SONAS, NetApp, Oracle and Xyratex among others. Then there are flexible converged compute storage platforms optimized for analytics and running different software tools, such as those from EMC (Greenplum), IBM (Netezza), NetApp (via partnerships) or Oracle among others, that can be used for different purposes in addition to supporting Hadoop and MapReduce.

If little data is databases and things not generally lumped into the big data bucket, and if you think or perceive big data only to be Hadoop MapReduce based data, then does that mean all the large unstructured non little data is very big data, or VBD?


Of course the virtualization folks might want to corner the V for Virtual Big Data, if they have not already. In that case, instead of Very Big Data, how about very, very Big Data (vvBD)? Or Ultra-Large Big Data (ULBD), or High-Revenue Big Data (HRBD)? Granted, the HR might cause some to think it is unique to Health Records or Human Resources, both of which btw leverage different forms of big data regardless of what you see or think big data is.

Does that then mean we should really be calling videos, audio, PACS, seismic, security surveillance video and related data VBD? Would this further confuse the market or the industry, or help elevate it to a grander status in terms of size (data file or object capacity, bandwidth, market size and application usage, market revenue and so forth)?

Do we need various industry consortiums, lobbyists or trade groups to go off and create models, taxonomies, standards and dictionaries based on their constituents' needs, and would those align with the needs of customers? After all, there are big dollars flowing around big data industry adoption (marketing).


What does this all mean?

Is Big Data BS?

First let me be clear: big data is not BS. However, there is a lot of marketing BS by some, along with hype and FUD adding to the confusion and chaos, perhaps even missed opportunities. Keep in mind that in chaos and confusion there can be opportunity for some.

IMHO big data is real.

There are different variations, use cases and types of products, technologies and services that fall under the big data umbrella. That does not mean everything can or should fall under the big data umbrella as there is also little data.

What this all means is that there are different types of applications for various industries that have big and little data, virtual and very big data from videos, photos, images, audio, documents and more.

Big data is a big buzzword bingo term these days, with big vendor marketing dollars being applied, so the buzz, hype, FUD and more are no surprise.

Ok, nuff said, for now.

Cheers gs


Industry trends and perspectives: SNW 2012 Rapping with Dave Raffo of SearchStorage

This is the seventh (here are the first, second, third, fourth, fifth and sixth) in a series of StorageIO industry trends and perspective audio blog and podcast discussions from Storage Networking World (SNW) Fall 2012 in Santa Clara, California.

StorageIO industry trends cloud, virtualization and big data

Given how conference conversations tend to occur in the hallways, lobbies and bar areas of venues, what better place to have candid conversations with people from throughout the industry, some you know, some you will get to know better.

In this episode, my co-host Bruce Rave, aka Bruce Ravid of Ravid and Associates (twitter @brucerave), meets up with Sr. News Director Dave Raffo of TechTarget and SearchStorage in the SNW trade show expo hall. Our conversation covers past and present SNWs along with other industry conferences, industry trends, software defined buzzwords, Green Bay Packers smack talk and more.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Dave, Bruce and myself.

StorageIO podcast

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts and podcasts from SNW and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy listening to Rapping with Dave Raffo of SearchStorage from the Fall SNW 2012 podcast.

Ok, nuff said.

Cheers gs
