Fall 2013 Dutch cloud, virtual and storage I/O seminars

It is that time of the year again when StorageIO will be presenting a series of seminar workshops in the Netherlands on cloud, virtual and data storage networking technologies and trends, along with best practice techniques.

Brouwer Storage

StorageIO partners with Brouwer Storage Consultancy, an independent firm in Holland that organizes these sessions. The sessions also mark Brouwer Storage Consultancy celebrating ten years in business, along with a long partnership with StorageIO.

The fall 2013 Dutch seminars cover storage I/O networking, data protection and related trends and topics for cloud and virtual environments. Click on the following links or images to view an abstract of the three sessions, including what you will learn, who they are for, and the buzzwords, themes, topics and technologies that will be covered.

Modernizing Data Protection: Moving Beyond Backup and Restore
September 30 & October 1, 2013

Storage Industry Trends: What’s News, What’s The Buzz and Hype
October 2, 2013

Storage Decision Making: Acquisition, Deployment, Day to Day Management
October 3 & 4, 2013

All workshop seminars are presented in a vendor and technology neutral manner (i.e., these are not vendor marketing or sales presentations), providing independent perspectives on industry trends, who is doing what, and the benefits and caveats of various approaches to addressing data infrastructure and storage challenges. View posts about earlier events here and here.

As part of the vendor and technology neutral theme, the workshop seminars are held off-site at hotel venues in Nijkerk, Netherlands, so there is no need to worry about sales teams coming in to sell you something during the breaks or lunch (both of which are provided). There are also opportunities throughout the workshops for engagement, discussion and interaction with other attendees, including your peers from various commercial, government and service provider organizations.

Learn more and register for these events by visiting the Brouwer Storage Consultancy website page (here), calling +31-33-246-6825, or emailing info@brouwerconsultancy.com.

Storage I/O events

View other upcoming and recent StorageIO activities, including live in-person, online web and recorded events, on our events page here, and check out our commentary and industry trends perspectives in the news here.

Bitter ballen
Ok, nuff said, I’m already hungry for bitter ballen (see above)!

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Server and Storage IO Memory: DRAM and nand flash

DRAM, DIMM, DDR3, nand flash memory, SSD, stating what’s often assumed

Often what’s assumed is not always the case. For example, in and around server, storage and I/O networking circles, including virtual as well as cloud environments, terms such as nand (Negated AND or NOT AND) flash memory, aka Solid State Device (SSD), DRAM (Dynamic Random Access Memory), DDR3 (Double Data Rate 3), not to mention DIMM (Dual Inline Memory Module), get tossed around with the assumption that everybody must know what they mean.

On the other hand, I find plenty of people who are not sure what those, among other terms or things, are. Sometimes they are even embarrassed to ask, particularly if they are a self-proclaimed expert.

So for those who need a refresher or primer, here you go: an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) of my book "The Green and Virtual Data Center" (CRC Press), available at Amazon.com and other global venues in print and ebook formats.

7.2.2 Memory

Computers rely on some form of memory, ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random access memory (RAM), non-volatile RAM (NVRAM) or nand flash (SSD), to external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Main memory or RAM, also known as dynamic RAM (DRAM) chips, is packaged in different ways, with a common form being dual inline memory modules (DIMMs) for notebook or laptop, desktop PC and server systems.

RAM main memory on a server is the fastest form of memory, second only to internal processor or chip based registers, L1, L2 or local memory. RAM and processor based memories are volatile and non-persistent, in that when power is removed the contents of memory are lost. As a result, some form of persistent memory is needed to keep programs and data when power is removed. Read only memory (ROM) and NVRAM are both persistent forms of memory, in that their contents are not lost when power is removed. The amount of RAM that can be installed into a server will vary with the specific architecture implementation and operating software being used. In addition to memory capacity and packaging format, the speed of memory is also important for moving data and programs quickly to avoid internal bottlenecks. Memory bandwidth performance increases with the width of the memory bus in bits and its frequency in MHz. For example, moving 8 bytes in parallel on a 64-bit bus at 100MHz provides a theoretical 800MByte/sec speed.
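The bandwidth arithmetic above can be sketched in a few lines as a sanity check (a simplified model for illustration; real DDR memory transfers on both clock edges and uses multiple channels, which this ignores):

```python
def theoretical_bandwidth_mb_s(bus_width_bits: int, frequency_mhz: float) -> float:
    """Theoretical peak rate: bytes moved per cycle times cycles per second."""
    bytes_per_transfer = bus_width_bits / 8    # a 64-bit bus moves 8 bytes in parallel
    return bytes_per_transfer * frequency_mhz  # MHz = millions of cycles, so result is MByte/sec

# Example from the text: 8 bytes on a 64-bit bus at 100MHz
print(theoretical_bandwidth_mb_s(64, 100))  # 800.0 MByte/sec
```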

To improve availability and increase the level of persistence, some servers include battery backed up RAM or cache to protect data in the event of a power loss. Another technique to protect memory data on some servers is memory mirroring, where twice the amount of memory is installed and divided into two groups. Each group of memory has a copy of the data being stored, so that in the event of a memory failure beyond those correctable with standard parity and error correction code (ECC), no data is lost. In addition to being fast, RAM based memories are also more expensive, and are used in smaller quantities compared to external persistent memories such as magnetic hard disk drives, magnetic tape or optical based memory media.
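To illustrate the idea behind the parity protection mentioned above, here is a minimal single-bit even-parity sketch (illustrative only; actual ECC DIMMs use Hamming-style codes that can also correct, not just detect, single-bit errors):

```python
def even_parity_bit(value: int) -> int:
    """Parity bit chosen so the total count of 1 bits (data + parity) is even."""
    return bin(value).count("1") % 2

def has_single_bit_error(value: int, stored_parity: int) -> bool:
    """True if the recomputed parity no longer matches what was stored."""
    return even_parity_bit(value) != stored_parity

data = 0b10110010                 # four 1 bits, so the parity bit is 0
parity = even_parity_bit(data)
corrupted = data ^ 0b00001000     # flip a single bit
print(has_single_bit_error(data, parity))       # False
print(has_single_bit_error(corrupted, parity))  # True: error detected
```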

Memory diagram
Memory and Storage Pyramid

The above shows a tiered memory model that may look familiar, as the bottom part is often expanded to show tiered storage. At the top of the memory pyramid is high-speed processor memory, followed by RAM, ROM, NVRAM and FLASH, along with many forms of external memory commonly called storage. More detail about tiered storage is covered in Chapter 8 (Data Storage – Disk, Tape, Optical, and Memory). In addition to being slower and lower cost than RAM based memories, disk storage along with NVRAM and FLASH based memory devices are also persistent.

By being persistent, when power is removed, data is retained on the storage or memory device. Also shown in the above figure is that, on a relative basis, less energy is used to power storage or memory at the bottom of the pyramid than at the upper levels, where performance increases. From a PCFE (Power, Cooling, Floor space, Economic) perspective, balancing memory and storage performance, availability, capacity and energy against a given function, quality of service and service level objective for a given cost needs to be kept in perspective, rather than simply considering the lowest cost for the most amount of memory or storage. In addition to gauging memory on capacity, other metrics include percent used, operating system page faults and page read/write operations, along with memory swap activity as well as memory errors.
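On Linux, several of the metrics just mentioned (page faults, swap activity) are exposed as plain name/value pairs in /proc/vmstat, so a small parser is enough to sample them (Linux-specific, and exact counter names can vary by kernel version):

```python
def parse_vmstat(text: str) -> dict:
    """Parse /proc/vmstat-style 'name value' lines into a dict of integer counters."""
    counters = {}
    for line in text.strip().splitlines():
        name, value = line.split()
        counters[name] = int(value)
    return counters

# On a Linux host: counters = parse_vmstat(open("/proc/vmstat").read())
sample = "pgfault 123456\npgmajfault 42\npswpin 7\npswpout 9"
counters = parse_vmstat(sample)
print(counters["pgmajfault"])  # 42 (major page faults in this sample)
```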

Base 2 versus base 10 numbering can account for some storage capacity that appears to be “missing” when real storage is compared to what is expected. Disk drive manufacturers use base 10 (decimal) to count bytes of data, while memory chip, server and operating system vendors typically use base 2 (binary). This has led to confusion when comparing a disk drive’s base 10 GB with a memory chip’s base 2 GB of capacity, such as 1,000,000,000 (10^9) bytes versus 1,073,741,824 (2^30) bytes. The IEC binary nomenclature uses MiB, GiB and TiB to denote base 2 mebibytes (2^20), gibibytes (2^30) and tebibytes (2^40), while the base 10 decimal units are MB, GB and TB. Most vendors do document how many bytes, sometimes in both base 2 and base 10, as well as the number of 512 byte sectors supported on their storage devices and storage systems, though it might be in the small print.
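The arithmetic is easy to check, and the gap grows with each unit step, which is why a marketed 1TB (base 10) drive shows up as roughly 931 binary "GB":

```python
# Base 10 (drive vendor) versus base 2 (memory / OS) units
GB, TB = 10**9, 10**12
GiB, TiB = 2**30, 2**40

drive_bytes = 1 * TB        # marketed capacity: 1,000,000,000,000 bytes
print(drive_bytes / GiB)    # ~931.32: what an OS reporting binary units shows
print(drive_bytes / TiB)    # ~0.909 TiB
print(drive_bytes // 512)   # 1,953,125,000 512-byte sectors
```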

Related more reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier).

Web chat Thur May 30th: Hot Storage Trends for 2013 (and beyond)

Join me on Thursday, May 30, 2013 at Noon ET (9AM PT) for a live web chat at the 21st Century IT (21cit) site (click here to register, sign-up, or view earlier posts). This will be an interactive online web chat conversation, so if you are not able to attend live, you can visit at your convenience to view the discussion and leave your questions along with comments. I have done several of these web chats with 21cit as well as other venues; they are a lot of fun and engaging (time flies by fast).

For those not familiar, 21cIT is part of the Desum/UBM family of sites, including Internet Evolution, SMB Authority, and Enterprise Efficiency among others, for which I do article posts, videos and live chats.


Sponsored by NetApp

I like these types of sites in that, while they have a sponsor, the content is generally kept separate between that of editors and contributors like myself and the vendor supplied material. In other words, I coordinate with the site editors on what topics I feel like writing (or doing videos) about that align with the given site's focus and themes, as opposed to following an advertorial calendar script.

During this industry trends perspective web chat, one of the topics and themes planned for discussion is software defined storage (SDS). View a recent video blog post I did about SDS here. In addition to SDS, solid state devices (SSD) including nand flash, cloud, virtualization, object storage, backup and data protection, performance, and management tools among others are topics that will be put on the virtual discussion table.

Following are some examples of recent and earlier industry trends perspectives posts that I have done over at 21cit:

Video: And Now, Software-Defined Storage!
There are many different views on what is or is not “software-defined” with products, protocols, preferences and even press releases. Check out the video and comments here.

Big Data and the Boston Marathon Investigation
How the human face of big-data will help investigators piece together all the evidence in the Boston bombing tragedy and bring those responsible to justice. Check out the post and comments here.

Don’t Use New Technologies in Old Ways
You can add new technologies to your data center infrastructure, but you won’t get the full benefit unless you update your approach with people, processes, and policies. Check out the post and comments here.

Don’t Let Clouds Scare You, Be Prepared
The idea of moving to cloud computing and cloud services can be scary, but it doesn’t have to be so if you prepare as you would for implementing any other IT tool. Check out the post and comments here.

Storage and IO trends for 2013 (& Beyond)
Efficiency, new media, data protection, and management are some of the keywords for the storage sector in 2013. Check out these and other trends, predictions along with comments here.

SSD and Real Estate: Location, Location, Location
You might be surprised how many similarities between buying real estate and buying SSDs.
Location matters and it’s not if, rather when, where, why and how you will be using SSD including nand flash in the future, read more and view comments here.

Everything Is Not Equal in the Data Center, Part 3
Here are steps you can take to give the right type of backup and protection to data and solutions, depending on the risks and scenarios they face. The result? Savings and efficiencies. Read more and view comments here.

Everything Is Not Equal in the Data Center, Part 2
Your data center’s operations can be affected at various levels, by multiple factors, in a number of degrees. And, therefore, each scenario requires different responses. Read more and view comments here.

Everything Is Not Equal in the Data Center, Part 1
It pays to check your data center. Different components need different levels of security, storage, and availability. Read more and view comments here.

Data Protection Modernizing: More Than Buzzword Bingo
IT professionals and solution providers should put technologies such as disk based backup, dedupe, cloud, and data protection management tools as assets and resources to make sure they receive necessary funding and buy in. Read more and view comments here.

Don’t Take Your Server & Storage IO Pathing Software for Granted
Path managers are valuable resources. They will become even more useful as companies continue to carry out cloud and virtualization solutions. Read more and view comments here.

SSD Is in Your Future: Where, When & With What Are the Questions
During EMC World 2012, EMC (as have other vendors) made many announcements around flash solid-state devices (SSDs), underscoring the importance of SSDs to organizations' future storage needs. Read more here about why SSD is in your future, along with viewing comments.

Changing Life cycles and Data Footprint Reduction (DFR), Part 2
In the second part of this series, the ABCDs (Archive, Backup modernize, Compression, Dedupe and data management, storage tiering) of data footprint reduction, as well as SLOs, RTOs, and RPOs are discussed. Read more and view comments here.

Changing Life cycles and Data Footprint Reduction (DFR), Part 1
Web 2.0 and related data needs to stay online and readily accessible, creating storage challenges for many organizations that want to cut their data footprint. Read more and view comments here.

No Such Thing as an Information Recession
Data, even older information, must be protected and made accessible cost-effectively. Not to mention that people and data are living longer as well as getting larger. Read more and view comments here.

These real-time, industry trends perspective interactive chats at 21cit are an open forum format (however, be polite and civil) with no vendor sales or marketing pitches. If you have specific questions you’d like to ask or points of view to express, click here and post them in the chat room at any time (before, during or after).

Mark your calendar for this event live Thursday, May 30, at noon ET or visit after the fact.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

April 2013 Server and StorageIO Update Newsletter

April 2013 Newsletter

Welcome to the April 2013 edition of the StorageIO Update newsletter. This edition includes more on nand flash SSD; after all, it’s not if, but rather when, where, why, with what, and how much SSD is in your future. Also more on object storage, clouds, big data and little data, HDDs, SNW, backup/restore, HA, BC, DR and data protection, along with data center topics and trends.

You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the April 2013 edition as an HTML (sent via email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

Welcome to the Cloud Bulk Object Storage Resources Center

Updated 8/31/19

Cloud Bulk Big Data Software Defined Object Storage Resources

Welcome to the Cloud, Big Data, Software Defined, Bulk and Object Storage Resources Center Page objectstoragecenter.com.

This object storage resources page, along with software defined, cloud, bulk, and scale-out storage content, is part of the Server StorageIOblog microsite collection of resources. Software defined, bulk, cloud and object storage exist to support expanding and diverse application data demands.

Other related resources include:

  • Software Defined, Cloud, Bulk and Object Storage Fundamentals
  • Software Defined Data Infrastructure Essentials book (CRC Press)
  • Cloud, Software Defined, Scale-Out, Object Storage News Trends
  • Object storage SDDC SDDI – via Software Defined Data Infrastructure Essentials (CRC Press 2017)

    Bulk, Cloud, Object Storage Solutions and Services

    There are various types of cloud, bulk, and object storage, including public services such as Amazon Web Services (AWS) Simple Storage Service (S3), Backblaze, Google, Microsoft Azure, IBM Softlayer, and Rackspace, among many others. There are also solutions for hybrid and private deployment from Cisco, Cloudian, CTERA, Cray, DDN, Dell EMC, Elastifile, Fujitsu, Hitachi Vantara/HDS, HPE, Hedvig, Huawei, IBM, NetApp, Noobaa, OpenIO, OpenStack, Quantum, Rackspace, Rozo, Scality, Spectra, StorPool, StorageCraft, Suse, Swift, Virtuozzo, WekaIO, WD, among many others.

    Bulk Cloud Object storage SDDC SDDI
    Via Software Defined Data Infrastructure Essentials (CRC Press 2017)

    Cloud products and services, along with associated data infrastructures including object storage, file systems, repositories and access methods, are at the center of bulk, big data, big bandwidth and little data initiatives on a public, private, hybrid and community basis. After all, not everything is the same in cloud, virtual and traditional data centers or information factories, from active data to in-active deep digital archiving.

    Object Context Matters

    Before discussing object storage, let's take a step back and look at some context that can clarify confusion around the term object. The word object has many different meanings and contexts, both inside the IT world as well as outside. Context matters with the term object: as a noun, an object is a thing that can be seen or touched, or a person or thing toward which action or feeling is directed.

    Besides a person, place or physical thing, an object can be a software-defined data structure that describes something. For example, a database record describing somebody’s contact or banking information, or a file descriptor with name, index ID, date and time stamps, permissions and access control lists along with other attributes or metadata. Another example is an object or blob stored in a cloud or object storage system repository, as well as an item in a hypervisor, operating system, container image or other application.

    Besides being a noun, object can also be a verb, as in to object: to express disapproval or disagreement with something or someone. From an IT context perspective, an object can also refer to a programming method (e.g. object-oriented programming [OOP], or Java [among other environments] objects and classes) and systems development, in addition to describing entities with data structures.

    In other words, a data structure describes an object that can be a simple variable, constant, complex descriptor of something being processed by a program, as well as a function or unit of work. There are also objects unique or with context to specific environments besides Java or databases, operating systems, hypervisors, file systems, cloud and other things.
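As a hypothetical sketch (the field names here are invented for illustration, not taken from any particular product or API), an object in the storage sense is simply a blob of data bundled with the metadata that describes it:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class StorageObject:
    """Illustrative object-storage record: a blob plus descriptive metadata."""
    key: str                       # name / index ID within a bucket or container
    data: bytes                    # the blob itself
    permissions: str = "private"   # simplified stand-in for an access control list
    metadata: Dict[str, str] = field(default_factory=dict)  # user-defined attributes

obj = StorageObject(key="reports/2013/q3.pdf", data=b"%PDF-...",
                    metadata={"content-type": "application/pdf"})
print(obj.key, len(obj.data), obj.metadata["content-type"])
```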

    The Need For Bulk, Cloud and Object Storage

    There is no such thing as an information recession with more data being generated, moved, processed, stored, preserved and served, granted there are economic realities. Likewise as a society our dependence on information being available for work or entertainment, from medical healthcare to social media and all points in between continues to increase (check out the Human Face of Big Data).

    In addition, people and data are living longer, as well as getting larger (hence little data, big data and very big data).

    Click here to view (and hear) more content including cloud and object storage fundamentals

    Click here to view software defined, bulk, cloud and object storage trend news

    cloud object storage

    Where to learn more

    The following resources provide additional information about big data, bulk, software defined, cloud and object storage.



    Via InfoStor: Object Storage Is In Your Future
    Via FujiFilm IT Summit: Software Defined Data Infrastructures (SDDI) and Hybrid Clouds
    Via MultiChannel: After ditching cloud business, Verizon inks Virtual Network Services deal with Amazon
    Via MultiChannel: Verizon Digital Media Services now offers integrated Microsoft Azure Storage
    Via StorageIOblog: AWS EFS Elastic File System (Cloud NAS) First Preview Look
    Via InfoStor: Cloud Storage Concerns, Considerations and Trends
    Via Server StorageIO: April 2015 Newsletter Focus on Cloud and Object storage
    Via StorageIOblog: AWS S3 Cross Region Replication storage enhancements
    Cloud conversations: AWS EBS, Glacier and S3 overview
    AWS (Amazon) storage gateway, first, second and third impressions
    Cloud and Virtual Data Storage Networking (CRC Book)

    View more news, trends and related cloud object storage activity here.

    Videos and podcasts at storageio.tv are also available via Apple iTunes.

    Human Face of Big Data (Book review)

    Seven Databases in Seven Weeks (Book review)

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Object and cloud storage are in your future, the questions are when, where, with what and how among others.

    Watch for more content and links to be added to this object storage center page soon, including posts, presentations, podcasts, polls and perspectives, along with services and product solution profiles.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Are your analyst, blogger, media or press requests being read?

    Storage I/O cloud virtual and big data perspectives

    Are you a marketing, public relations, press or analyst relations, or social media expert whose email messages, notes, updates or requests get overlooked? In your first paragraph or couple of sentences, do you indicate who you are representing, what the info or request is about, as well as what call to action you are looking for? If you are doing what is done in many of the requests for coverage, meetings or announcements, you may be getting overlooked, resulting in a missed opportunity; that is, unless your goal is simply to gauge how many requests you send out.

    Now do I have your attention?

    Time for some tough love.

    Almost every day (not as much on weekends) my email inbox fills up with notes from vendor marketing and public relations (PR) firms telling me about an announcement, requesting a briefing, giving a heads up on something that will be occurring, pitching a story or idea, or looking for coverage.

    Being an analyst, author, advisory consultant and blogger, among other roles, getting all of those emails as well as phone calls or other messages comes with the territory. However, and here is the point of this post, most of those notes or messages I receive do not get to the point. More importantly, I am noticing on an increasing basis that many of the requests for meetings, coverage, or introductions to announcements are not passing the quick scan test.

    The trend I have noticed for a couple of years now is that what is being announced, what the topic is, or worse, who it is about, gets lost in the second, third or later paragraph of content. Thus, for those who get lots of messages a day: get to the point, and save the prose and the sample article or content for later, unless that is what you are sending.

    In other words, tell the reader who you are representing (e.g. the company), the product or focus area, what the news is about, why it is relevant, and the call to action. Skip the long intro citing marketing research or testimonials, and the notes that read like mini articles, not to mention mentioning other vendors, which might cause a quick scan to confuse the message about them vs. you or your client.

    The other day I received one of many announcement, briefing and meeting requests, and this one stood out. In fact, it stood out not so much for what was actually being announced, by whom, or what it pertained to (all of which are relevant, btw). It stood out because it did what most are not doing these days: it got to the point, and in about the time required for a sip of coffee I knew what I needed to know, as opposed to needing a trip to the coffee pot for a refill to get through the piece.

    The note impressed me so much that I asked Melissa Kolodziej who sent it to me if I could repost her note here as an example of how to do it.

    Subject: News: Attunity Enhances Big Data Replication Solution for Data Warehousing & Cloud Initiatives

    Dear Greg,
    Good morning. I wanted to share with you the exciting product news we announced regarding  Attunity Replicate 2.1. This latest release of our high-performance data delivery solution now features several new enhancements for data warehousing and the cloud. One key optimization is the addition of Attunity  TurboStream CDC, an innovative feature designed to significantly enhance delivery performance of changed data. This proprietary technology stages source data, consolidates changes, and delivers data in parallel to the target, optimizing change data capture (CDC) for high-volume and low-bandwidth scenarios. The enhanced Attunity Replicate solution is ideal for strategic initiatives including Big Data analytics and business intelligence typically seen in data warehouse and cloud environments.

    Please review the release below and contact me at your earliest convenience (Tel. 781-730-4073) to set up a time to interview Matt Benati, Attunity’s VP of Global Marketing. He is available for briefings this week and next.  I would also appreciate hearing back from you if you plan to cover this news.

    Thank you for your time and best regards,

    Melissa Kolodziej
    Director of Marketing Communications, Attunity
    Tel. 781-730-4073
    melissa.kolodziej@attunity.com

    I thought about showing an example of what not to do; however, for now, let's let sleeping dogs lie where they rest. Although I will say: make sure you put a name in the name field for your email blasts, vs. the recipient receiving Dear <Name_Goes_Here> or the wrong name. The wrong name I'm used to; however, I may respond and have some fun at your expense so that you don't forget ;).

    Kudos and nice job, Melissa, for doing what should be common knowledge or basic best practice, yet what I now see as the exception vs. the norm from many public relations firms or vendors.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    Some things keep going around, Seagate ships 2 Billion HDDs

    Seagate

    Seagate (@Seagate) announced today that it has reached the milestone of having shipped 2 billion hard disk drives (HDDs): something round that stores data which keeps growing. As part of the announcement, Seagate has a good infographic and facts page here going back to 1979, when it was founded as Shugart Technology (read about Al Shugart here).

    By coincidence, just a few years before Seagate was founded, McDonalds (who also makes round things) announced that it had served over 20 billion hamburgers. Thus McDonalds feeds the appetites of consumers hungry for a quick meal, while Seagate feeds information demands, perhaps while those same consumers stop for a quick breakfast, lunch, coffee or dinner. Speaking of things that go around (like HDDs), check out what NAS, NASA and NASCAR have in common, all of which are also involved in big data as well as little data.

    Storage I/O industry trends image

    Both Seagate and McDonalds have also expanded their menus of offerings over the years, maintaining their core products while expanding into new and adjacent areas given different appetites and preferences. After all, in the cloud, virtual or physical data center, also known as an information factory, not everything is the same either.

    Cloud

    Granted Seagate is helping to feed or fuel the internet along with traditional hungry demand for data, not to mention people and data are living longer, as well as getting larger.

    Cloud, virtual server, big data and little data storage I/O image

In the case of Seagate and the other drive manufacturers, which have consolidated down to three (Toshiba, Seagate and Western Digital), the physical devices are getting smaller while capacities are increasing.

    Storage I/O

Why the continued growth? As mentioned, data is getting larger (big data and little data) and living longer, and there is also no such thing as a data or information recession. Consequently data storage is an important pillar of cloud, virtual and traditional information services, with HDD's remaining popular alongside nand flash solid state devices (SSD).

    The Seagate info graphic page can be seen here and is a good walk back in time for some, perhaps a history lesson for others. It goes back to the Sony Walkman which some might remember, launch of the PC and Apple Macintosh in the 80s, Linux and the web in the 90s and moving forward from then to now.

    HDD
    A few of my HDD’s, different types for various tasks.

If you think or believe HDD's are a dead technology, take a few minutes to view the infographic to update your insight on what has been an important aspect of computing and remains popular in cloud environments. On the other hand, if you believe that HDD's are still a core piece of computing and will remain so in future roles, have a look to see how things have progressed; maybe you will have some déjà vu.

Oh, for those who are thinking that the HDD did not begin in 1979, you are absolutely correct, as it dates back to the 1950s. Here is a link to something that I wrote a few years ago on the HDD's 50th birthday, and it looks like it will easily celebrate 60 and beyond.

    Additional related reading:
    In the data center or information factory, not everything is the same
    Hard Disk Drives (HDDs) for virtual and physical environments
    Happy 50th, hard drive. But will you make it to 60?
    Seagate to say goodbye to Cayman Islands, Hello Ireland
    More Storage IO momentus HHDD and SSD moments part II
    The Human Face of Big Data, a Book Review

Congratulations to Seagate, now how long until the 3 billionth served, excuse me, shipped HDD occurs?

Disclosure: It's been almost a month since my last visit to McDonalds or my last purchase of another HDD (or SSD) from Amazon.com.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    NetApp EF540, something familiar, something new

    StorageIO Industry trends and perspectives image

NetApp announced the other day a new all nand flash solid state device (SSD) storage system called the EF540 that is available now. The EF540 has some things new and cool, along with some things familiar, tried, true and proven.

What is new is that the EF540 is an all nand flash multi-level cell (MLC) SSD storage system. What is old is that the EF540 is based on the NetApp E-Series (read more here and here) and SANtricity software with hundreds of thousands of installed systems. As a refresher, the E-Series are the storage system technologies and solutions obtained via the Engenio acquisition from LSI in 2011.

    Image of NetApp EF540 via ntapgeek.com
    Image via www.ntapgeek.com

The EF540 expands the NetApp SSD flash portfolio, which includes products such as FlashCache (read cache, aka PAM) for controllers in ONTAP based storage systems. Other items in the NetApp flash portfolio include FlashPool SSD drives for persistent read and write storage in ONTAP based systems. Complementing FlashCache and FlashPool is the server-side PCIe caching card and software FlashAccel. NetApp claims to have revenue shipped 36PB of flash complementing over 3 Exabytes (EB) of storage, while continuing to ship a large number of SAS and SATA HDD's.

    NetApp also previewed its future FlashRay storage system that should appear in beta later in 2013 and general availability in 2014.

    In addition to SSD and flash related announcements, NetApp also announced enhancements to its ONTAP FAS/V6200 series including the FAS/V6220, FAS/V6250 and FAS/V6290.

    Some characteristics of the NetApp EF540 and SANtricity include:

• Two models with 12 or 24 x 6Gbs SAS 800GB MLC SSD devices
• Up to 9.6TB or 19.2TB physical storage in a 2U (3.5 inch) tall enclosure
• Dual controllers for redundancy, load-balancing and availability
• IOP performance of over 300,000 4KByte random 100% reads at under 1ms
• 6GByte/sec of 512KByte sequential reads, 5.5GByte/sec random reads
• Multiple RAID levels (0, 1, 10, 3, 5, 6) and flexible group sizes
• 12GB of DRAM cache memory in each controller (mirrored)
• 4 x 8GFC host server-side ports per controller
• Optional expansion host ports (6Gb SAS, 8GFC, 10Gb iSCSI, 40Gb IBA/SRP)
• Snapshots and replication (synchronous and asynchronous) including to HDD systems
• Can be used for traditional IOP intensive little-data, or bandwidth for big-data
• Proactive SSD wear monitoring and notification alerts
• Utilizes SANtricity version 10.84
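As a quick sanity check, the raw capacity figures in the list above follow directly from the drive count and the 800GB per-device size. A hedged sketch (assuming decimal, base-10 units, as is typical for storage capacities):

```python
# Raw capacity check for the two EF540 models listed above.
# Assumes decimal (base-10) units, as is typical for storage capacities.
DRIVE_CAPACITY_GB = 800  # 800GB MLC SSD per device

def raw_capacity_tb(num_drives: int) -> float:
    """Total raw (pre-RAID) capacity in TB for a given drive count."""
    return num_drives * DRIVE_CAPACITY_GB / 1000

print(raw_capacity_tb(12))  # 12-drive model: 9.6 (TB)
print(raw_capacity_tb(24))  # 24-drive model: 19.2 (TB)
```

Usable capacity will of course be lower once one of the RAID levels from the list above is applied.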

Poll: Are large storage arrays' days numbered?

EMC and NetApp (along with other vendors) continue to sell large numbers of HDD's as well as large amounts of SSD. Both EMC and NetApp are taking similar approaches of leveraging PCIe flash cards as cache, adding software functionality to complement underlying storage systems. The benefit is that the cache approach is less disruptive for many environments while allowing improved return on investment (ROI) on existing assets.

                                      EMC             NetApp
Storage systems with HDD and SSD      VMAX, VNX       FAS/V, E-Series
Storage systems with SSD cache        FastCache       FlashCache
All SSD based storage                 VMAX, VNX       EF540
All new SSD system in development     Project "X"     FlashRay
Server side PCIe SSD cache            VFCache         FlashAccel
Partner ecosystems                    Yes             Yes

The best IO is the one that you do not have to do; however, the next best are those that have the least cost or impact, which is where SSD comes into play. SSD is like real estate in that location matters in terms of providing benefit, as does how much space or capacity is needed.

    What does this all mean?
The NetApp EF540, based on the E-Series storage system architecture, is like one of its primary competitors (e.g. the EMC VNX, also available as an all-flash model) in that both product lines have been around for over a decade with hundreds of thousands of installed systems. The similarities extend to both continuing to evolve their code bases while leveraging new hardware and software functionality. These improvements have resulted in better performance, availability, capacity, energy effectiveness and cost reduction.

What's your take on RAID still being relevant?

    From a performance perspective, there are plenty of public workloads and benchmarks including Microsoft ESRP and SPC among others to confirm its performance. Watch for NetApp to release EF540 SPC results given their history of doing so with other E-Series based systems. With those or other results, compare and contrast to other solutions looking not just at IOPS or MB/sec (bandwidth), also latency, functionality and cost.

    What does the EF540 compete with?
The EF540 competes with all flash-based SSD solutions (Violin, Solidfire, Purestorage, Whiptail, Kaminario, IBM/TMS, and the upcoming EMC Project "X" (aka XtremIO)) among others. Some of those systems use general-purpose servers combined with SSD drives, PCIe cards and management software, while others leverage customized platforms with software. To a lesser extent, competition will also come from mixed mode SSD and HDD solutions along with some PCIe target SSD cards for certain situations.

    What to watch and look for:
It will be interesting to view and contrast public price-performance results using SPC or Microsoft ESRP among others to see how the EF540 compares, and to compare it against other storage-based as well as SSD systems beyond just the number of IOPS. Keep an eye on latency, as well as bandwidth, feature functionality and associated costs.

Given that the NetApp E-Series are OEMed or sold by third parties, let's see if something looking similar or identical to the EF540 appears at any of those or new partners. This includes traditional general purpose and little-data environments, along with cloud, managed service provider, high performance computing and high productivity computing (HPC), super computing (SC), big data and big bandwidth among others.

Poll: Have SSD's been successful in traditional storage systems and arrays?

    The EF540 could also appear as a storage or IO accelerator for large-scale out, clustered, grid and object storage systems for meta data, indices, key value stores among other uses either direct attached to servers, or via shared iSCSI, SAS, FC and InfiniBand (IBA) SCSI Remote Protocol (SRP).

    Keep an eye on how the startups that have been primarily Just a Bunch Of SSD (JBOS) in a box start talking about adding new features and functionality such as snapshots, replication or price reductions. Also, keep an eye and ear open to what EMC does with project “X” along with NetApp FlashRay among other improvements.

For NetApp customers, prospects, partners, E-Series OEMs and their customers with the need for IO consolidation or performance optimization for big-data, little-data and related applications, the EF540 opens up new opportunities and should be good news. For EMC and other competitors, the EF540 means new competition, which also signals an expanding market with new opportunities in adjacent areas for growth. This further signals the need for diverse SSD portfolios and product options to meet different customer application needs, along with increased functionality vs. lowest cost for high capacity fast nand SSD storage.


    Disclosure: NetApp, Engenio (when LSI), EMC and TMS (now IBM) have been clients of StorageIO.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Thanks for viewing StorageIO content and top 2012 viewed posts

    StorageIO industry trends cloud, virtualization and big data

    2012 was a busy year (it was our 7th year in business) along with plenty of activity on StorageIOblog.com as well as on the various syndicate and other sites that pickup our content feed (https://storageioblog.com/RSSfull.xml).

Excluding traditional media venues, columns, articles, web casts and web site visits (StorageIO.com and StorageIO.TV), StorageIO generated content including posts and podcasts has reached over 50,000 views per month (and growing) across StorageIOblog.com and our partner or syndicated sites. Including both public and private, there were about four dozen in-person events and activities, not counting attendance at conferences or vendor briefing sessions, along with plenty of industry commentary. On the twitter front there was plenty of activity as well, closing in on 7,000 followers.

Thank you to everyone who has visited the sites where you will find StorageIO generated content, along with industry trends and perspectives commentary, articles, tips, webinars, live in-person events and other activities.

    In terms of what was popular on the StorageIOblog.com site, here are the top 20 viewed posts in alphabetical order.

    Amazon cloud storage options enhanced with Glacier
    Announcing SAS SANs for Dummies book, LSI edition
    Are large storage arrays dead at the hands of SSD?
    AWS (Amazon) storage gateway, first, second and third impressions
    EMC VFCache respinning SSD and intelligent caching
    Hard product vs. soft product
    How much SSD do you need vs. want?
    Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go
    Is SSD dead? No, however some vendors might be
    IT and storage economics 101, supply and demand
    More storage and IO metrics that matter
    NAD recommends Oracle discontinue certain Exadata performance claims
    New Seagate Momentus XT Hybrid drive (SSD and HDD)
    PureSystems, something old, something new, something from big blue
Researchers and marketers don't agree on future of nand flash SSD
    Should Everything Be Virtualized?
    SSD, flash and DRAM, DejaVu or something new?
    What is the best kind of IO? The one you do not have to do
    Why FC and FCoE vendors get beat up over bandwidth?
    Why SSD based arrays and storage appliances can be a good idea

    Moving beyond the top twenty read posts on StorageIOblog.com site, the list quickly expands to include more popular posts around clouds, virtualization and data protection modernization (backup/restore, HA, BC, DR, archiving), general IT/ICT industry trends and related themes.

    I would like to thank the current StorageIOblog.com site sponsors Solarwinds (management tools including response time monitoring for physical and virtual servers) and Veeam (VMware and Hyper-V virtual server backup and data protection management tools) for their support.

    Thanks again to everyone for reading and following these and other posts as well as for your continued support, watch for more content on the above and other related and new topics or themes throughout 2013.

    Btw, if you are into Facebook, you can give StorageIO a like at facebook.com/storageio (thanks in advance) along with viewing our newsletter here.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    The Human Face of Big Data, a Book Review


    My copy of the new book The Human Face of Big Data created by Rick Smolan and Jennifer Erwitt arrived yesterday compliments of EMC (the lead sponsor). In addition to EMC, the other sponsors of the book are Cisco, VMware, FedEx, Originate and Tableau software.

To say this is a big book would be an understatement; then again, big data is a big topic with a lot of diversity if you open your eyes and think in a pragmatic way, which you will see once you open its pages. This is physically a big book (11 x 14 inches) with lots of pictures, text, stories, factoids and thought stimulating information on the many facets and dimensions of big data across 224 pages.

While big data as a buzzword and industry theme might be new, along with some of the related technologies, techniques and focus areas, other aspects have been around for some time. Big data means many things to various people depending on their focus or areas of interest, ranging from analytics to images, videos and other big files. A common theme is that there is no such thing as an information or data recession, that people and data are living longer and getting larger, and that we are all addicted to information for various reasons.

    Big data needs to be protected and preserved as it has value, or its value can increase over time as new ways to leverage it are discovered which also leads to changing data access and life cycle patterns. With many faces, facets and areas of interests applying to various spheres of influence, big data is not limited to programmatic, scientific, analytical or research, yet there are many current and use cases in those areas.

Big data is not limited to videos for security surveillance, entertainment, telemetry, audio, social media, energy exploration, geosciences, seismic, forecasting or simulation, yet those have been areas of focus for years. Some big data files or objects are millions of bytes (MBytes), billions of bytes (GBytes) or trillions of bytes (TBytes) in size that, when put into file systems or object repositories, add up to Exabytes (EB – a million TBytes) or Zettabytes (ZB – 1000 EBs). Now if you think those numbers are far-fetched, simply look back to when you thought a TByte, GByte, let alone a MByte, was big or a far-fetched future. Remember, there is no such thing as a data or information recession; people and data are living longer and getting larger.
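To make the unit ladder above concrete, here is a small sketch (decimal units assumed; each step up is a factor of 1000, with PBytes sitting between TBytes and EBytes):

```python
# Decimal storage unit ladder: each unit is 1000x the previous one.
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB"]

def bytes_in(unit: str) -> int:
    """Number of bytes in one of the named decimal units."""
    return 1000 ** UNITS.index(unit)

# An Exabyte is a million TBytes (with PBytes in between)...
print(bytes_in("EB") // bytes_in("TB"))  # 1000000
# ...and a Zettabyte is a thousand EBytes.
print(bytes_in("ZB") // bytes_in("EB"))  # 1000
```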

Big data is more than Hadoop, MapReduce, SAS or other programmatic and analytics focused tools, solutions or platforms, yet those all have been and will be significant focus areas in the future. This also means big data is more than data warehouses, data marts, data mining, social media and event or activity log processing, which are also mainstays with continued roles going forward. Just as there are large MByte, GByte or TByte sized files or objects, there are also millions and billions of smaller files, objects or pieces of information that are part of the big data universe.

You can take a narrow product, platform, tool, process, application, sphere of influence or domain of interest view of big data, or a pragmatic view of its various faces and facets. Of course you can also spin everything that is not little-data to be big data, and that is where some of the BS about big data comes from. Big data is not exclusive to data scientists, researchers, academia, governments or analysts, yet there are areas of focus where those are important. What this means is that there are other areas of big data that do not need a data science, computer science, mathematics or statistics PhD or other advanced degree or training; in other words, big data is for everybody.

    Cover image of Human Face of Big Data Book

Back to how big this book is, in both physical size as well as rich content. Note the size of The Human Face of Big Data book in the adjacent image, which for comparison purposes includes a copy of my last book Cloud and Virtual Data Storage Networking (CRC), along with a 2.5 inch hard disk drive (HDD) and a growler. The growler is from Lift Bridge Brewery (Stillwater, MN); after all, reading a big book about big data can create the need for a big beer to address a big thirst for information ;).

The Human Face of Big Data is more than a coffee table or picture book, as it is full of information, factoids and perspectives on how information and data surround us every day. Check out the image below and note the 2.5 inch HDD sitting on the top right hand corner of the page above the text. Open up a copy of The Human Face of Big Data and you will see examples of how data and information are all around us, and our dependence upon them.

A look inside the book The Human Face of Big Data image

    Book Details:
    Copyright 2012
    Against All Odds Productions
    ISBN 978-1-4549-0827-2
    Hardcover 224 pages, 11 x 0.9 x 14 inches
    4.8 pounds, English

    There is also an applet to view related videos and images found in the book at HumanFaceofBigData.com/viewer in addition to other material on the companion site www.HumanFacesofBigData.com.

Get your copy of The Human Face of Big Data at Amazon.com by clicking here, or at other venues, including by clicking on the following image (Amazon.com).

    Some added and related material:
    Little data, big data and very big data (VBD) or big BS?
    How many degrees separate you and your information?
    Hardware, Software, what about Valueware?
Changing Lifecycles and Data Footprint Reduction (Data doesn't have to lose value over time)
    Garbage data in, garbage information out, big data or big garbage?
    Industry adoption vs. industry deployment, is there a difference?
    Is There a Data and I/O Activity Recession?
    Industry trend: People plus data are aging and living longer
    Supporting IT growth demand during economic uncertain times
    No Such Thing as an Information Recession

For those who can see big data in a broad and pragmatic way, perhaps using the visualization aspect, this book brings forth the idea that there are and will be many opportunities. And for those who have a narrow or specific view of what is or is not big data, there is so much of it around, in so many types and focus areas, that you too will see some benefits.

Do you want to play in or be part of a big data puddle, pond or lake, or sail and explore the oceans of big data and all the different aspects found in, under and around those bigger, broader bodies of water?

Bottom line: this is a great book and read regardless of whether you are involved with data and information related topics or themes; the format and design lend themselves to any audience. Broaden your horizons, open your eyes, ears and thinking to the many facets and faces of big data that are all around us by getting your copy of The Human Face of Big Data (click here to go to Amazon for your copy).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Ceph Day Amsterdam 2012 (Object and cloud storage)

    StorageIO industry trends cloud, virtualization and big data

    Recently while I was in Europe presenting some sessions at conferences and doing some seminars, I was invited by Ed Saipetch (@edsai) of Inktank.com to attend the first Ceph Day in Amsterdam.

    Ceph day image

As luck or fate would have it, I was staying in Nijkerk, which is about an hour train ride from Amsterdam Central Station, and had a free day in my schedule. After a morning train ride and a nice walk from Amsterdam Central I arrived at the Tobacco Theatre (a former tobacco trading venue) where Ceph Day was underway, in time for a lunch of kroketten sandwiches.

    Attendees at Ceph Day

Let's take a quick step back and address, for those not familiar, what Ceph (short for cephalopod) is and why it was worth spending a day attending this event. Ceph is an open source distributed object scale out (e.g. cluster or grid) software platform running on industry standard hardware.

Dell server supporting Ceph demo
Sketch of Ceph demo configuration

Ceph is used for deploying object storage, cloud storage and managed services, general purpose storage for research, commercial, scientific, high performance computing (HPC) or high productivity computing (commercial) environments, along with backup or data protection and archiving destinations. Other software similar in functionality or capabilities to Ceph includes OpenStack Swift, Basho Riak CS, Cleversafe, Scality and Caringo among others. There are also tin wrapped software (e.g. appliance or pre-packaged) solutions such as Dell DX (Caringo), DataDirect Networks (DDN) WOS, EMC ATMOS and Centera, Amplidata and HDS HCP among others. From a service standpoint, these solutions can be used to build services similar to Amazon S3 and Glacier, Rackspace Cloud Files and Cloud Block Storage, DreamHost DreamObjects and HP Cloud Storage among others.

    Ceph cloud and object storage architecture image

At the heart of Ceph is RADOS, a distributed object store that consists of peer nodes functioning as object storage devices (OSDs). Data can be accessed via REST (Amazon S3 like) APIs, libraries, CephFS and gateways, with information being spread across nodes and OSDs using a CRUSH based algorithm (note that Sage Weil is one of the authors of CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data). Ceph is scalable in terms of performance, availability and capacity by adding extra nodes with hard disk drives (HDDs) or solid state devices (SSDs). One of the presentations pertained to DreamHost, an early adopter of Ceph used to build their DreamObjects (cloud storage) offering.
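The key idea behind CRUSH is that placement is computed rather than looked up, so any client can independently determine where an object's replicas live without consulting a central table. The following is only a toy illustration of that idea (simple hashing, not the actual CRUSH algorithm, which also accounts for failure domains and device weights):

```python
import hashlib

def place_object(object_name: str, num_osds: int, replicas: int = 3) -> list:
    """Deterministically pick `replicas` distinct OSD ids for an object.

    Toy stand-in for CRUSH: every client hashing the same name computes
    the same placement, with no central lookup table required.
    """
    chosen = []
    attempt = 0
    while len(chosen) < replicas:
        digest = hashlib.sha256(f"{object_name}:{attempt}".encode()).hexdigest()
        osd = int(digest, 16) % num_osds
        if osd not in chosen:  # replicas must land on distinct OSDs
            chosen.append(osd)
        attempt += 1
    return chosen

# Any client computes the same three OSDs for the same object name:
print(place_object("example-object", num_osds=1080))
```

Real CRUSH also spreads the chosen replicas across separate nodes and racks (failure domains), which this sketch ignores.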

    Ceph cloud and object storage deployment image

In addition to storage nodes, there are an odd number of monitor nodes to coordinate and manage the Ceph cluster, along with optional gateways for file access. In the above figure (via DreamHost), load balancers sit in front of gateways that interact with the storage nodes. The storage node in this example is a physical server with 12 x 3TB HDDs, each configured as an OSD.

    Ceph dreamhost dreamobject cloud and object storage configuration image

In the DreamHost example above, there are 90 storage nodes plus 3 management nodes, so the total raw storage capacity (no RAID) is about 3PB (12 x 3TB = 36TB per node, x 90 nodes = 3.24PB). Instead of using RAID or mirroring, each object's data is replicated or copied to three (e.g. N=3) different OSDs (on separate nodes), where N is adjustable for a given level of data protection, for a usable storage capacity of about 1PB.

    Note that for more usable capacity and lower availability, N could be set lower, or a larger value of N would give more durability or data protection at higher storage capacity overhead cost. In addition to using JBOD configurations with replication, Ceph can also be configured with a combination of RAID and replication providing more flexibility for larger environments to balance performance, availability, capacity and economics.
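The capacity arithmetic above can be sketched in a few lines (a simplified model: raw capacity divided by the replica count N, ignoring filesystem and metadata overhead):

```python
# Usable capacity under N-way replication, per the DreamHost example above.
def usable_capacity_pb(nodes: int, drives_per_node: int,
                       drive_tb: float, n_replicas: int) -> float:
    """Raw TB divided by replica count, converted to PB (decimal units)."""
    raw_tb = nodes * drives_per_node * drive_tb
    return raw_tb / n_replicas / 1000  # TB -> PB

# 90 nodes x 12 drives x 3TB = 3.24PB raw; with N=3 about 1.08PB usable.
print(usable_capacity_pb(90, 12, 3, 3))
# Lowering N to 2 trades durability for roughly 1.62PB usable.
print(usable_capacity_pb(90, 12, 3, 2))
```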

    Ceph dreamhost and dreamobject cloud and object storage deployment image

One of the benefits of Ceph is the flexibility to configure it how you want or need for different applications. This can be a cost-effective, hardware light configuration using JBOD or internal HDDs in small form factor generally available servers, or high density servers and storage enclosures with optional RAID adapters along with SSD. This flexibility differs from some cloud and object storage systems or software tools which take a stance of not using or avoiding RAID, vs. providing options and flexibility to configure and use the technology how you see fit.

    Here are some links to presentations from Ceph Day:
    Introduction and Welcome by Wido den Hollander
    Ceph: A Unified Distributed Storage System by Sage Weil
    Ceph in the Cloud by Wido den Hollander
    DreamObjects: Cloud Object Storage with Ceph by Ross Turk
    Cluster Design and Deployment by Greg Farnum
    Notes on Librados by Sage Weil

    Presentations during ceph day

While at Ceph Day, I was able to spend a few minutes with Sage Weil, Ceph creator and founder of inktank.com, to record a podcast (listen here) about what Ceph is, where and when to use it, along with other related topics. Also while at the event I had a chance to sit down with Curtis (aka Mr. Backup) Preston where we did a simulcast video and podcast. The simulcast involved Curtis recording this video with me as a guest discussing Ceph, cloud and object storage, backup, data protection and related themes while I recorded this podcast.

One of the interesting things I heard, or actually did not hear, at the Ceph Day event, which I tend to hear at related conferences such as SNW, was a focus on where and how to use, configure and deploy Ceph, along with various configuration options and replication or copy modes, as opposed to going off on erasure codes or other tangents. In other words, instead of focusing on data protection protocols and algorithms, or what is wrong with the competition or other architectures, the Ceph Day focus was on removing cloud and object storage objections and on enablement.

    Where do you get Ceph? You can get it here, as well as via 42on.com and inktank.com.

    Thanks again to Sage Weil for taking time out of his busy schedule to record a pod cast talking about Ceph, as well 42on.com and inktank for hosting, and the invitation to attend the first Ceph Day in Amsterdam.

    View of downtown Amsterdam on way to train station to return to Nijkerk
    Returning to Amsterdam central station after Ceph Day

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Garbage data in, garbage information out, big data or big garbage?

    StorageIO industry trends cloud, virtualization and big data

    Do you know the computer technology saying, garbage data in results in garbage information out?

In other words, even with the best algorithms and hardware, bad, junk or garbage data put in results in garbage information delivered. Of course, you might have data analysis and cleansing software to look for, find and remove bad or garbage data; however, that's for a different post on another day.

    If garbage data in results in garbage information out, does garbage big data in result in big garbage out?

    I’m sure my sales and marketing friends or their surrogates will jump at the opportunity to tell me why and how big data is the solution to the decades old garbage data in problem.

    Likewise they will probably tell me big data is the solution to problems that have not even occurred or been discovered yet, yeah right.

    However garbage data does not discriminate or show preference towards big data or little data, in fact it can infiltrate all types of data and systems.

Let's shift gears from big and little data to how all of that information is protected, backed up, replicated or copied for HA, BC, DR, compliance, regulatory or other reasons. I wonder how much garbage data is really out there, and how many garbage backups, snapshots, replicas or other copies of data exist? Sounds like a good reason to modernize data protection.

    If we don’t know where the garbage data is, how can we know if there is a garbage copy of the data for protection on some other tape, disk or cloud. That also means plenty of garbage data to compact (e.g. compress and dedupe) to cut its data footprint impact particular with tough economic times.

    Does this mean then that the cloud is the new destination for garbage data in different shapes or forms, from online primary to back up and archive?

    Does that then make the cloud the new virtual garbage dump for big and little data?

Hmm, I think I need to empty my desktop trash bin and email deleted items among other digital housekeeping chores now.

On the other hand, I just had a thought about orphaned data and orphaned storage; however, let's let those sleeping dogs lie where they rest for now.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    SSD, flash and DRAM, DejaVu or something new?

    StorageIO industry trends cloud, virtualization and big data

Recently I was in Europe for a couple of weeks, including stops at Storage Networking World (SNW) Europe in Frankfurt, StorageExpo Holland, Ceph Day in Amsterdam (object and cloud storage), and Nijkerk, where I delivered two separate two-day seminars and a single one-day seminar.

Image of Frankfurt train station; image of inside front of ICE train going from Frankfurt to Utrecht

At the recent StorageExpo Holland event in Utrecht, I gave a couple of presentations, one on cloud, virtualization and storage networking trends, the other taking a deeper look at Solid State Devices (SSDs). As in the past, StorageExpo Holland was great: a fantastic venue, many large exhibits and strong attendance, which I heard was over 6,000 people over two days (excluding exhibiting vendors, VARs, analysts, press and bloggers), several times larger than the SNW event in Frankfurt.

Image of Ilja Coolen (twitter @iCoolen), session host for the SSD presentation in Utrecht; image of StorageExpo Holland exhibit show floor in Utrecht

Both presentations were very well attended and included lively interactive discussion during and after the sessions. The theme of my second talk was SSD: the question is not if, rather what to use where, how and when, which brings us to this post.

For those who have been around or using SSD for more than a decade outside of cell phones, cameras, SD cards or USB thumb drives, SSD probably means DRAM based with some form of data persistency mechanism. More recently, mention SSD and that implies NAND flash based, either MLC, eMLC or SLC. Some might even think of NVRAM or other forms of SSD, including emerging MRAM, PCM or memristors among others; however, let's stick to NAND flash and DRAM for now.

    image of ssd technology evolution

Often in technology what is old can be new, and what is new can be seen as old. If you have seen, experienced or done something before, you will have a sense of DejaVu and it might be evolutionary. On the other hand, if you have not seen, heard or experienced it, or it has found a new audience, then it can be revolutionary or maybe even an industry first ;).

Technology evolves, gets improved on, matures, and can often go in cycles of adoption, deployment, refinement, retirement, and so forth. SSD in general has been an on-again, off-again cycle technology for the past several decades, except for the past six to seven years. Normally there is an up cycle tied to different events: servers not being fast enough or affordable, so SSD is used to help address performance woes, or drives and storage systems not being fast enough, and so forth.

    Btw, for those of you who think that the current SSD focused technology (nand flash) is new, it is in fact 25 years old and still evolving and far from reaching its full potential in terms of customer deployment opportunities.


NAND flash memory has helped keep SSD practical for the past several years, riding an improvement curve similar to the one keeping alive the hard disk drives (HDDs) it was supposed to replace: improved reliability, endurance or duty cycle, better annual failure rate (AFR), larger space capacity, lower cost, and enhanced interfaces, packaging, power and functionality.

    Where SSD can be used and options

DRAM, historically at least for the enterprise, has been the main option for SSD-based solutions using some form of data persistency. Data persistency options include battery backup combined with internal HDDs to de-stage information from the DRAM before power was lost. TMS (recently bought by IBM) was one of the early SSD vendors from the DRAM era that made the transition to flash, including being one of the first, many years ago, to combine DRAM as a cache layer over NAND flash as a persistency or de-stage layer. If you were not familiar with TMS back then and their capabilities, you might believe that some more recent introductions are new and revolutionary, and perhaps they are in their own right, or with enough caveats and qualifiers.

    An emerging trend, which for some will be Dejavu, is that of using more DRAM in combination with nand flash SSD.

Oracle is one example of a vendor who IMHO rather quietly (intentionally or accidentally) has done this in the 7000 series storage systems as well as ExaData based database storage systems. Rest assured they are not alone; in fact many of the legacy large storage vendors have also piled up large amounts of DRAM based cache in their storage systems. For example, EMC with 2TByte of DRAM cache in their VMAX 40K, or similar systems from Fujitsu, HP, HDS, IBM and NetApp (including the recent acquisition of DRAM based CacheIQ) among others. This has also prompted the question of whether SSD has been successful in traditional storage arrays, systems or appliances; some would have you believe not. Click here to learn more and cast your vote.

SSD, IO, memory and storage hierarchy

    So is the future in the past? Some would say no, some will say yes, however IMHO there are lessons to learn and leverage from the past while looking and moving forward.

Early SSDs were essentially RAM disks, that is, a portion of main random access memory (RAM), or what we now call DRAM, set aside as a non-persistent (unless battery backed up) cache or device. Using a device driver, applications could use the RAM disk as though it were a normal storage system. Different vendors sprang up with drivers for various platforms, then disappeared as the need for them was reduced by faster storage systems and interfaces, RAM disk drivers supplied by platform vendors, not to mention SSD devices.
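The RAM disk concept is simple enough to sketch. The toy Python class below (a hypothetical illustration, not any vendor's driver) presents a block read/write interface backed by ordinary memory, which is essentially what those early RAM disk drivers exposed to applications:

```python
class RamDisk:
    """Toy RAM disk: a block read/write interface backed by ordinary
    memory, the same idea early RAM disk device drivers exposed."""
    def __init__(self, blocks: int = 1024, block_size: int = 512):
        self.block_size = block_size
        self._mem = bytearray(blocks * block_size)  # volatile storage!

    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) == self.block_size
        off = lba * self.block_size
        self._mem[off:off + self.block_size] = data

    def read_block(self, lba: int) -> bytes:
        off = lba * self.block_size
        return bytes(self._mem[off:off + self.block_size])

disk = RamDisk()
disk.write_block(7, b"A" * 512)
assert disk.read_block(7) == b"A" * 512  # fast, but gone on power loss
```

The speed comes from staying in memory; the catch, then as now, is that without battery backup or de-staging to persistent media, the contents vanish when the power does.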

Oh, for you tech trivia types, there were also database machines from the late 80's, such as Britton Lee, that would offload your database processing functions to a specialized appliance. Sound like Oracle ExaData I, II or III to anybody?

    Image of Oracle ExaData storage system

    Ok, so we have seen this movie before, no worries, old movies or shows get remade, and unless you are nostalgic or cling to the past, sure some of the remakes are duds, however many can be quite good.

Same goes with the remake of some of what we are seeing now. Sure, there is a generation that does not know nor care about the past; it's full speed ahead, leveraging whatever will get them there.

Thus we are seeing in-memory databases again; some of you may remember the original series (pick your generation, platform, tool and technology), with each variation getting better. With 64-bit processors, 128-bit and beyond file systems and addressing, not to mention the ability for more DRAM to be accessed directly or via memory address extension, combined with memory data footprint reduction or compression, there is more space to put things (e.g. no such thing as a data or information recession).

Let's also keep in mind that the best IO is the IO that you do not have to do, and that SSD, which is an extension of the memory map, plays by the same rules of real estate: location matters.

Thus, here we go again for some of you (DejaVu), while for others, get ready for a new and exciting ride (new and revolutionary). We are back to the future with in-memory databases, which for a time will take some pressure off underlying IO systems, until they once again outgrow server memory addressing limits (or IT budgets).

However, don't fall into a false sense of security; no fear, as there is no such thing as a data or information recession. As sure as the sun rises in the east and sets in the west, sooner or later those IOs that were or are being kept in memory will need to be de-staged to persistent storage, either NAND flash SSD, HDD or, somewhere down the road, PCM, MRAM and more.
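That de-staging pattern can be sketched in a few lines. The toy Python example below (an illustrative write-back cache, not any product's design) shows writes landing in memory first, reads resolving from cache when possible, and dirty data being de-staged to a stand-in persistent backend later:

```python
class WriteBackCache:
    """Toy write-back cache: writes land in memory first; dirty entries
    are de-staged to a (stand-in) persistent backend later."""
    def __init__(self, backend: dict):
        self.backend = backend   # stand-in for flash SSD, HDD or cloud
        self.cache = {}          # key -> value, held in memory
        self.dirty = set()       # keys not yet persisted

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)      # fast acknowledgment; persistence deferred

    def read(self, key):
        if key in self.cache:    # cache hit: the IO we did not have to do
            return self.cache[key]
        value = self.backend[key]
        self.cache[key] = value  # populate cache on miss
        return value

    def destage(self):
        for key in list(self.dirty):
            self.backend[key] = self.cache[key]   # persist to backend
            self.dirty.discard(key)

ssd = {}
wbc = WriteBackCache(ssd)
wbc.write("row42", "data")
assert "row42" not in ssd        # still only in memory
wbc.destage()
assert ssd["row42"] == "data"    # now persisted
```

Until `destage()` runs, the data exists only in volatile memory, which is exactly why sooner or later those in-memory IOs must land on persistent media.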


There is another trend: with more IOs being cached, reads are moving to where they should resolve, which is closer to the application, higher up in the memory and IO pyramid or hierarchy (shown above).

Thus, we could see a shift over time to more writes and ugly IOs being sent down to the storage systems. Keep in mind that any cache historically provides temporal relief; the question is how long that relief lasts until the next new and revolutionary (or DejaVu) technology shows up.

    Ok, go have fun now, nuff said.

    Cheers gs


    IBM vs. Oracle, NAD intervenes, again

    StorageIO industry trends cloud, virtualization and big data

With HP announcing that they were sold a bogus deal with Autonomy (read here, here and here among others) and taking a multi-billion dollar write-off (loss), and speculation about who will be named the new CEO of Intel in 2013, don’t worry if you missed the latest in the ongoing IBM vs. Oracle campaign. The other day the NAD (National Advertising Division), part of the Better Business Bureau (BBB), issued yet another statement about IBM and Oracle (read here and posted below).

    NAD BBB logo

    In case you had not heard, earlier this year, Oracle launched an advertising promotion touting how much faster their solutions are vs. IBM. Perhaps you even saw the advertising billboards along highways or in airports making the Oracle claims.

Big Blue (e.g. IBM), being the giant that they are, was not going to take the Oracle challenge sitting down, so they stepped up and complained to the Better Business Bureau (BBB). As a result, the NAD issued a decision for Oracle to stop the ads (read more here). Oracle at $37.1B (May 2012 annual earnings) is about a third the size of IBM at $106.9B (2011 earnings), thus neither is exactly a small business.

Let's get back to the topic at hand: the NAD issued yet another directive. In the latest spat, after the first ads, Oracle launched the $10M challenge (you can read about that here).

    Oracle 10 million dollar challenge ad image

Once again the BBB and the NAD weighed in for IBM and issued the following statement (mentioned above):

    For Immediate Release
    Contact: Linda Bean
    212.705.0129

    NAD Determines Oracle Acted Properly in Discontinuing Performance Claim Couched in ‘Contest’ Language

    New York, NY – Nov. 20, 2012 – The National Advertising Division has determined that Oracle Corporation took necessary action in discontinuing advertising that stated its Exadata server is “5x Faster Than IBM … Or you win $10,000,000.”

    The claim, which appeared in print advertising in the Wall Street Journal and other major newspapers, was challenged before NAD by International Business Machines Corporation.

    NAD is an investigative unit of the advertising industry system of self-regulation and is administered by the Council of Better Business Bureaus.

    As an initial matter, NAD considered whether or not Oracle’s advertisement conveyed a comparative performance claim – or whether the advertisement simply described a contest.

    In an NAD proceeding, the advertiser is obligated to support all reasonable interpretations of its advertising claims, not just the message it intended to convey. In the absence of reliable consumer perception evidence, NAD uses its judgment to determine what implied messages, if any, are conveyed by an advertisement.

    Here, NAD found that, even accounting for a sophisticated target audience, a consumer would be reasonable to take away the message that all Oracle Exadata systems run five times as fast as all IBM’s Power computer products. NAD noted in its decision that the fact that the claim was made in the context of a contest announcement did not excuse the advertiser from its obligation to provide substantiation.

    The advertiser did not provide any speed performance tests, examples of comparative system speed superiority or any other data to substantiate the message that its Exadata computer systems run data warehouses five times as fast as IBM Power computer systems.

    Accordingly, NAD determined that the advertiser’s decision to permanently discontinue this advertisement was necessary and appropriate. Further, to the extent that Oracle reserves the right to publish similar advertisements in the future, NAD cautioned that such performance claims require evidentiary support whether or not the claims are couched in a contest announcement.

    Oracle, in its advertiser’s statement, said it disagreed with NAD’s findings, but would take “NAD’s concerns into account should it disseminate similar advertising in the future.”

    ###

    NAD’s inquiry was conducted under NAD/CARU/NARB Procedures for the Voluntary Self-Regulation of National Advertising. Details of the initial inquiry, NAD’s decision, and the advertiser’s response will be included in the next NAD/CARU Case Report.

    About Advertising Industry Self-Regulation: The Advertising Self-Regulatory Council establishes the policies and procedures for advertising industry self-regulation, including the National Advertising Division (NAD), Children’s Advertising Review Unit (CARU), National Advertising Review Board (NARB), Electronic Retailing Self-Regulation Program (ERSP) and Online Interest-Based Advertising Accountability Program (Accountability Program.) The self-regulatory system is administered by the Council of Better Business Bureaus.

Self-regulation is good for consumers. The self-regulatory system monitors the marketplace, holds advertisers responsible for their claims and practices and tracks emerging issues and trends. Self-regulation is good for advertisers. Rigorous review serves to encourage consumer trust; the self-regulatory system offers an expert, cost-efficient, meaningful alternative to litigation and provides a framework for the development of a self-regulatory response to emerging issues.

    To learn more about supporting advertising industry self-regulation, please visit us at: www.asrcreviews.org.

    Linda Bean Director, Communications,
    Advertising Self-Regulatory Council

    Tel: 212.705.0129
    Cell: 908.812.8175
    lbean@asrc.bbb.org

    112 Madison Ave.
    3rd Fl.
    New York, NY
    10016

    Not surprisingly, IBM sent the following email to highlight their latest news:

    Greg,

    For the third time in eight months Oracle has agreed to kill a misleading advertisement targeting IBM after scrutiny from the Better Business Bureau’s National Advertising Division.

    Oracle’s ‘$10 Million Challenge’ ad claimed that its Exadata server was ‘Five Times Faster than IBM Power or You Win $10,000,000.’ The advertising council just issued a press release announcing that the claim was not supported by the evidence in the record, and that Oracle has agreed to stop making the claim. ‘[Oracle] did not provide speed performance tests, examples of comparative systems speed superiority or any other data to  substantiate its message,’ the BBB says in the release: The ads ran in The Wall Street Journal, The Economist, Chief Executive Magazine, trade publications and online.

    The National Advertising Division reached similar judgments against Oracle advertising on two previous occasions this year. Lofty and unsubstantiated claims about Oracle systems being ‘Twenty Times Faster than IBM’ and ‘Twice as Fast Running Java’ were both deemed to be unsubstantiated and misleading. Oracle quietly shelved both campaigns.

    If you follow Oracle’s history of claims, you won’t be surprised that the company issues misleading ads until they’re called out in public and forced to kill the campaign. As far back as 2001, Oracle’s favorite tactic has been to launch unsubstantiated attacks on competitors in ads while promising prize money to anyone who can disprove the bluff. Not surprisingly, no prize money is ever paid as the campaigns wither under scrutiny. They are designed to generate publicity for Oracle, nothing more. You may be familiar with their presentation, ‘Ridding the Market of Competition,’ which they issued to the Society of Competitive Intelligence Professionals laying out their strategy.

    The repeated rulings by the BBB even caused analyst Rob Enderle to comment that, ‘there have been significant forced retractions and it is also apparent that increasingly the only people who could cite these false Oracle performance advantages with a straight face were Oracle’s own executives, who either were too dumb to know they were false or too dishonest to care.’

    Let me know if you’re interested in following up on this news. You won’t hear anything about it from Oracle.

    Best,

    Chris

    Christopher Rubsamen
    Worldwide Communications for PureSystems and Cloud Computing
    IBM Systems & Technology Group
    aim: crubsamen
    twitter: @crubsamen

Wow, I never knew, however I should not be surprised, that there is a Society of Competitive Intelligence Professionals.

Now Oracle is what they are: aggressive, with a history of doing creative or innovative things (e.g. stepping out of bounds) in sales and marketing campaigns, benchmarking and other activities. On the other hand, has IBM been victimized at the hands of Oracle, thus having to resort to using the BBB and NAD as part of its new sales and marketing toolkit to counter Oracle?

    Does anybody think that the above will cause Oracle to retreat, repent, and tone down how they compete on the field of sales and marketing of servers, storage, database and related IT, ICT, big and little data, clouds?

Anyone else have a visual of a group of IBMers sitting around a table at an exclusive country club, enjoying a fine cigar along with a glass of cognac, toasting each other on their recent success in having the BBB and NAD issue another ruling against Oracle? Meanwhile, perhaps at some left coast yacht club, the Oracle crew are high-fiving, congratulating each other on their commission checks while spraying champagne all over the place like they just won the America's Cup race?

How about it Oracle? IBM says I'm not going to hear anything from you; is that true?

    Ok, nuff said (for now).

    Cheers gs
