Open Data Center Alliance (ODCA) BMW Private Cloud Strategy

Storage I/O, cloud, virtual and big data perspectives

If your organization, like StorageIO, is a member of the Open Data Center Alliance (ODCA), you may be aware of the resources it makes available about cloud, virtualization, security and more. Unlike many other industry associations or trade groups dominated by vendors, the ODCA has an IT or customer focus, including member-developed best practices, strategies and templates.

A good example is the recently released ODCA member BMW Group private cloud strategy document.

This 24-page document covers the BMW Group private cloud strategy, which sets the stage for a phased move to a future hybrid cloud. By taking a phased approach, BMW is leveraging and transitioning to the future while maintaining support for its current environment (including Windows-based systems) as part of a paradigm shift. It is refreshing to see an organization use cloud as part of a paradigm or IT service delivery model, and not just as a new technology or platform focus.

Topics covered include IaaS along with PaaS for database, web and SAP workloads, and CSaaS (Corporate Software as a Service), based on the NIST cloud model. Also included are the roles and integration of CMDB, ITSM, ITIL and orchestration in a business-driven vs. technology-driven model. Being business driven means there is a mission statement for the BMW cloud strategy, with objectives and design criteria aligned to support organizational enablement rather than particular tools, technologies or trends.

What I like about the BMW strategy is that it is aligned to support the business, as opposed to finding ways to use technology for its own sake or to justify why a cloud is needed. In other words, it is something different from the agendas of those needing a technology, tool, product, standard or service to be adopted.

While I have been on the vendor side, the ODCA customer-focused angle appeals to me from my days working in IT organizations on that side of the table. On the other hand, for some of you, reading through the BMW document might trigger déjà vu from experiences with web-based, client-server, information utility and other IT service delivery models or paradigms.

Learn more at the ODCA newsroom

If you have not done so already, check out and join the ODCA.

Ok nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Thanks for viewing StorageIO content and top 2012 viewed posts

StorageIO industry trends cloud, virtualization and big data

2012 was a busy year (it was our 7th year in business), with plenty of activity on StorageIOblog.com as well as on the various syndication and other sites that pick up our content feed (https://storageioblog.com/RSSfull.xml).

Excluding traditional media venues, columns, articles, webcasts and web site visits (StorageIO.com and StorageIO.TV), StorageIO-generated content including posts and podcasts has reached over 50,000 views per month (and growing) across StorageIOblog.com and our partner or syndicated sites. Including both public and private, there were about four dozen in-person events and activities, not counting attending conferences or vendor briefing sessions, along with plenty of industry commentary. On the Twitter front there was plenty of activity as well, closing in on 7,000 followers.

Thank you to everyone who visited the sites where you will find StorageIO-generated content, along with industry trends and perspective comments, articles, tips, webinars, live in-person events and other activities.

In terms of what was popular on the StorageIOblog.com site, here are the top 20 viewed posts in alphabetical order.

Amazon cloud storage options enhanced with Glacier
Announcing SAS SANs for Dummies book, LSI edition
Are large storage arrays dead at the hands of SSD?
AWS (Amazon) storage gateway, first, second and third impressions
EMC VFCache respinning SSD and intelligent caching
Hard product vs. soft product
How much SSD do you need vs. want?
Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go
Is SSD dead? No, however some vendors might be
IT and storage economics 101, supply and demand
More storage and IO metrics that matter
NAD recommends Oracle discontinue certain Exadata performance claims
New Seagate Momentus XT Hybrid drive (SSD and HDD)
PureSystems, something old, something new, something from big blue
Researchers and marketers dont agree on future of nand flash SSD
Should Everything Be Virtualized?
SSD, flash and DRAM, DejaVu or something new?
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?
Why SSD based arrays and storage appliances can be a good idea

Moving beyond the top twenty posts read on the StorageIOblog.com site, the list quickly expands to include more popular posts around clouds, virtualization and data protection modernization (backup/restore, HA, BC, DR, archiving), general IT/ICT industry trends and related themes.

I would like to thank the current StorageIOblog.com site sponsors Solarwinds (management tools including response time monitoring for physical and virtual servers) and Veeam (VMware and Hyper-V virtual server backup and data protection management tools) for their support.

Thanks again to everyone for reading and following these and other posts, as well as for your continued support; watch for more content on the above and other related and new topics or themes throughout 2013.

Btw, if you are into Facebook, you can give StorageIO a like at facebook.com/storageio (thanks in advance) along with viewing our newsletter here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data Center Infrastructure Management (DCIM) and IRM

StorageIO industry trends cloud, virtualization and big data

There are many business drivers and technology reasons for adopting data center infrastructure management (DCIM) and infrastructure resource management (IRM) techniques, tools and best practices. Today's agile data centers need updated management systems, tools and best practices that allow organizations to plan, run at low cost, and analyze for workflow improvement. After all, there is no such thing as an information recession, so the need to move, process and store more data keeps growing. With budget and other constraints, organizations need to be able to stretch available resources further while reducing costs, including for physical space and energy consumption.

The business value proposition of DCIM and IRM is summarized in the following figure:

DCIM, Data Center, Cloud and storage management figure

Data Center Infrastructure Management (DCIM), also known as IRM, has, as its name describes, a focus on managing resources in the data center or information factory. IT resources include physical floor and cabinet space, power and cooling, networks and cabling, physical (and virtual) servers and storage, and other hardware along with software management tools. For some organizations, DCIM will have a more facilities-oriented view focusing on physical floor space, power and cooling. Other organizations will have a converged view crossing hardware, software and facilities, along with how those are used to deliver information services in a cost-effective way.

Common to all DCIM and IRM practices are metrics and measurements, along with other related information about available resources, for gaining situational awareness. Situational awareness enables visibility into what resources exist, how they are configured and being used, by what applications, and their performance, availability, capacity and economic effectiveness (PACE) in delivering a given level of service. In other words, DCIM enabled with metrics and measurements that matter allows you to avoid flying blind and to make prompt, effective decisions.

DCIM, Data Center and Cloud Metrics Figure

DCIM comprises the following (a small illustrative sketch follows the list):

  • Facilities, power (primary and standby, distribution), cooling, floor space
  • Resource planning, management, asset and resource tracking
  • Hardware (servers, storage, networking)
  • Software (virtualization, operating systems, applications, tools)
  • People, processes, policies and best practices for management operations
  • Metrics and measurements for analytics and insight (situational awareness)
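
To make the situational awareness idea above concrete, here is a minimal, hypothetical Python sketch: the Resource class and pace_report function are illustrative names, not from any DCIM product or API. It models resources with PACE attributes and prints a simple report.

```python
# Hypothetical illustration only: Resource and pace_report are made-up names,
# not part of any DCIM product or API.
from dataclasses import dataclass
from typing import List


@dataclass
class Resource:
    name: str             # e.g. "rack42-server01"
    kind: str             # server, storage, network or facility
    performance: float    # observed IOPS, transactions/sec, etc.
    availability: float   # fraction of time available (0.0 to 1.0)
    capacity_used: float  # units consumed (GB, kW, rack units)
    capacity_total: float
    monthly_cost: float   # economics: fully loaded cost per month (USD)

    def utilization(self) -> float:
        return self.capacity_used / self.capacity_total

    def cost_per_used_unit(self) -> float:
        # Economic effectiveness: cost per unit of capacity actually used.
        return self.monthly_cost / max(self.capacity_used, 1e-9)


def pace_report(resources: List[Resource]) -> None:
    """Print a simple PACE (performance, availability, capacity, economics) view."""
    for r in resources:
        print(f"{r.name:16} {r.kind:8} perf={r.performance:>9,.0f} "
              f"avail={r.availability:.4f} util={r.utilization():6.1%} "
              f"cost/used-unit={r.cost_per_used_unit():8.2f}")


pace_report([
    Resource("rack42-server01", "server", 12_000, 0.9995, 24, 32, 1_800.0),
    Resource("san-a-v7000", "storage", 55_000, 0.9999, 180_000, 250_000, 9_500.0),
])
```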

The evolving DCIM model is built around elasticity, multi-tenancy, scalability and flexibility, and is metered and service-oriented. Service-oriented means being able to rapidly deliver new services while keeping customer experience and satisfaction in mind. Also part of being customer-focused is enabling organizations to be competitive with outside service offerings while becoming more productive and economically efficient.

DCIM, Data Center and Cloud E2E management figure

While specific technology domains or groups may be focused on their respective areas, interdependencies across IT resource areas are a fact of life for efficient virtual data centers. For example, provisioning a virtual server relies on configuration and security of the virtual environment, physical servers, storage and networks, along with associated software and facility-related resources.

You can read more about DCIM, ITSM and IRM in this white paper that I did, as well as in my books Cloud and Virtual Data Storage Networking (CRC Press) and The Green and Virtual Data Center (CRC Press).

Ok, nuff said, for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Dell is buying Quest software, not the phone company Qwest


For those not familiar with Quest, it is a software company, not to be confused with the telephone communications company formerly known as Qwest (now known as CenturyLink).

Both Dell and Quest have been on software-related acquisition initiatives the past few years, with Quest having purchased vKernel, Vizioncore (vRanger virtualization backup) and BakBone (who had acquired Alavarii and Asempra) for traditional backup and data protection, among others. Not to be outdone, as well as purchasing Quest, Dell has also more recently bought AppAssure (Disclosure: StorageIOblog site sponsor) for data protection, SonicWALL and Wyse, in addition to some other recent purchases (ASAP, Boomi, Compellent, Exanet, EqualLogic, Force10, InsightOne, KACE, Ocarina, Perot, RNA and Scalent among others).

What does this mean?
Dell is expanding the scope of its business with more products (hardware, software), solution bundles, services and channel partnering opportunities. Some of the software tools and focus areas that Quest brings to the Dell table or portfolio include:

  • Database management (Oracle, SQL Server)
  • Data protection (virtual and physical backup, replication, BC, DR)
  • Performance monitoring (DCIM and IRM) of applications and infrastructure
  • User workspace management (application delivery)
  • Windows server management (migration and management of AD, Exchange, SharePoint)
  • Identity and access management (security, compliance, privacy)

What does Dell get by spending over $2B USD on Quest?

  • Additional software titles or product
  • More software developers for their Software group
  • Sales people to help promote, partner and sell software solutions
  • Create demand pull for other Dell products and services via software
  • Increase its partner reach via existing Quest VARs and business partners
  • Extend the size of the Dell software and intellectual property (IP) portfolio
  • New revenue streams that complement existing products and lines of business
  • Potential for a better rate of return on some of its $12B USD in cash or equivalents

Is this a good move for Dell?
Yes, for the above reasons.

Is there a warning in this for Dell?
Yes, they need to execute, keeping the Quest team and their other teams focused on the respective partners, products and market opportunities while expanding into new areas. Dell also needs to leverage Quest to further its cause in creating trust, confidence and strategic relationships with channel partners to reach new markets in different geographies. In addition, Dell needs to articulate its strategy and positioning of the various solutions to avoid products being perceived as competing vs. complementing each other.

    Additional Dell related links:
    Dell Storage Customer Advisory Panel (CAP)
    Dell Storage Forum 2011 revisited
    Dude, is Dell doing a disk deal again with Compellent?
    Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize
    Post Holiday IT Shopping Bargains, Dell Buying Exanet?
    Dell Will Buy Someone, However Not Brocade (At least for now)

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Spring (May) 2012 StorageIO newsletter

StorageIO Newsletter Image
Spring (May) 2012 Newsletter

Welcome to the Spring (May) 2012 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall (December) 2011 edition.

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the Spring (May) 2012 edition as HTML or PDF, or to go to the newsletter page to view previous editions.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

    Nuff said for now

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Part IV: PureSystems, something old, something new, something from big blue

    This is the fourth in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here, and the next post here.

    So what does this mean for IBM Business Partners (BPs) and ISVs?
What could very well differentiate IBM PureSystems from those of other competitors is to take what their partner NetApp has done with FlexPods, combining third-party applications from Microsoft and SAP among others, and take it to the next level. Similar to what helped make EMC Centera a success (or at least sell a lot of them) was the inclusion and leveraging of third-party ISVs and BPs to add value. Compared to other vendors with object-based or content accessible storage (CAS) or online archive platforms that focused on technology features, functions, speeds and feeds, EMC realized the key was getting ISVs on board so that BPs and their own direct sales force could sell the solution.

With PureSystems, IBM is revisiting what it has done in the past, which is to offer bundled solutions providing incentives for ISVs to support and BPs to sell the IBM-branded solution. EMC took an early step by including VMware with their Vblock, combining server, storage, networking and software, with NetApp taking the next step by adding SAP, Microsoft and other applications. Dell, HP, Oracle and others are following suit, so it only makes sense that IBM returns to its roots, leveraging its DNA to reach out and bring on board ISVs who are partners now, have been in the past, or represent new opportunities.

IBM is throwing its resources behind this, including its innovation centers for training around the world, where business partners can get the knowledge and technical support they need. In other words, workshops or seminars on how to sell, deploy and set up these systems, application and customer testing or proof of concepts, and the other things one would expect from IBM for such an initiative. In addition to technology and sales training along with marketing support, IBM is making its financing capabilities available to help customers, as well as offering incentives to its business partners to simplify acquisitions.

So what buzzword bingo topics and themes did IBM address with this announcement?
IBM did a fantastic job of knocking the ball out of the park as far as buzzword bingo goes, and deserves an atta boy or atta girl!

So what about how this will affect sales of BladeCenters or other systems?
If all IBM and their BPs do is encroach on existing system sales to circle the wagons and protect the installed base, that would be one thing. However, if IBM and their BPs can use the new packaging and model approach to reestablish customers and partnerships, or open and expand into new adjacent markets, then the net difference should be more BladeCenters (excuse me, PureFlex systems) being sold.

    So what will this cost?
IBM is citing entry PureSystems Express models starting at around $100,000 USD for base systems, with others starting at around $200,000 and $300,000, expandable into larger configurations and budgets. Note that, like airlines that advertise a low airfare and then charge extra for peanuts, drinks, extra bag space and changes to reservations, look at these and related systems not just for the starting price, but also for expansion costs over different time periods. Contact IBM, your BP or ISV to find out what one of these systems will do for, and cost, you.

    So what about VARs and IBM business partners (BPs)?
This could be a boon for those BPs and ISVs that had previously sold their software solutions bundled with IBM hardware platforms and were being challenged by other converged solution stacks or being forced to unbundle. It will also allow those business partners to compete on par with other converged solutions, or continue selling the pieces they are familiar with, however under a new umbrella. Of course, pricing will be a focus and concern for some, who will want to see what added value exists vs. acquiring the various components separately. This also means that IBM will have to make incentives available for its partners to make a living, while also allowing their customers to afford solutions and maximize their return on innovation (the new ROI) and enablement.

    Click here to view the next post in this series, ok nuff said for now.

    Here are some links to learn more:
    Various IBM Redbooks and related content
    The blame game: Does cloud storage result in data loss?
    What do you need when its time to buy a new server?
    2012 industry trends perspectives and commentary (predictions)
    Convergence: People, Processes, Policies and Products
    Buzzword Bingo and Acronym Update V2.011
    The function of XaaS(X) Pick a letter
    Hard product vs. soft product
    Part I: PureSystems, something old, something new, something from big blue
    Part II: PureSystems, something old, something new, something from big blue
    Part III: PureSystems, something old, something new, something from big blue
    Part IV: PureSystems, something old, something new, something from big blue
    Part V: PureSystems, something old, something new, something from big blue
    Cloud and Virtual Data Storage Networking

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Part V: PureSystems, something old, something new, something from big blue

    This is the fifth in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here.

    So what about vendor or technology lock in?
So who is responsible for vendor or technology lock-in? When I was working in IT organizations (e.g. what vendors call the customer), the thinking was that vendors were responsible for lock-in. Later, when I worked for different vendors (manufacturers and VARs), the thinking was that lock-in was caused by the competition. More recently, I am of the mindset that vendor lock-in is a shared responsibility. I'm sure some marketing wiz or sales type will be happy to explain the subtle ways in which their solution does not cause lock-in.

Vendor lock-in can be a shared responsibility. Generally speaking, lock-in, stickiness and account control are essentially the same, or at least strive for similar results. For example, vendor lock-in has a negative stigma to some; however, vendor stickiness may be a new term, perhaps even sounding cool, and thus not a concern. Remember the Mary Poppins song, a spoonful of sugar helps the medicine go down? In other words, sometimes switching to a different term such as sticky vs. vendor lock-in helps make the situation taste better.

    So what should you do?
    Take a closer look if you are considering converged infrastructures, cloud or data centers in a box, turnkey application or information services deployment platforms. Likewise, if you are looking at specific technologies such as those from Cisco UCS, Dell vStart, EMC Vblock (or via VCE), HP, NetApp FlexPod or Oracle (ExaLogic, ExaData, etc) among others, also check out the IBM PureSystems (Flex and PureApplication). Compare and contrast these converged solutions with your traditional procurement and deployment modes including cost of acquiring hardware, software, ongoing maintenance or service fees along with value or benefit of bundled tools. There may be a higher cost for converged systems in some scenarios, however compare on the value and benefit derived vs. doing the integration yourself.

Compare and contrast how converged solutions enable, however also consider what constraints exist in terms of flexibility to reconfigure in the future or make other changes. For example, as part of integration, does a solution take a lowest-common-denominator approach to software and firmware revisions for compatibility that may lag behind what you can apply to standalone components? Also, compare and contrast various reference architectures with different solution bundles or packages.

    Most importantly compare and evaluate the solutions on their ability to meet and exceed your base requirements while adding value and enabling return on innovation while also being cost-effective. Do not be scared of these bundled solutions; however do your homework to make informed decisions including overcoming any concerns of lock in or future costs and fees. While these types of solutions are cool or interesting from a technology perspective and can streamline acquisition and deployment, make sure that there is a business benefit that can be addressed as well as enablement of new capabilities.

    So what does this all mean?
Congratulations to IBM with their PureSystems for leveraging their DNA and roots, bundling what had been unbundled before cloud and stacks were popular and trendy. IBM has done a good job of talking vision and strategy along the lines of converged and dynamic, elastic and smart, clouds and other themes for the past couple of years, while selling the pieces as parts of solutions, a la carte, or packaged by their ISVs and business partners.

What will be interesting to see is whether BladeCenter customers shift to buying PureFlex, which should be an immediate boost giving proof points of adoption, while essentially upselling what was previously available. However, more interesting will be to see whether net new customers and footprints are sold, as opposed to simply selling a newer and enhanced version of previous components.

In other words, will IBM be able to keep up its focus and execution where it has sold the previously available components, while also holding onto current ISV and BP footprint sales, perhaps enabling those partners to recapture some hardware and solution sales that had been unbundled (e.g. ISV software sold separately from IBM platforms), and move into new adjacent markets?

    Here are some links to learn more:
    Various IBM Redbooks and related content
    The blame game: Does cloud storage result in data loss?
    What do you need when its time to buy a new server?
    2012 industry trends perspectives and commentary (predictions)
    Convergence: People, Processes, Policies and Products
    Buzzword Bingo and Acronym Update V2.011
    The function of XaaS(X) – Pick a letter
    Hard product vs. soft product
    Part I: PureSystems, something old, something new, something from big blue
    Part II: PureSystems, something old, something new, something from big blue
    Part III: PureSystems, something old, something new, something from big blue
    Part IV: PureSystems, something old, something new, something from big blue
    Part V: PureSystems, something old, something new, something from big blue
    Cloud and Virtual Data Storage Networking

Ok, so what is next? Let's see how this unfolds for IBM and their partners.

    Nuff said for now.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Part III: PureSystems, something old, something new, something from big blue

    This is the third in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here, and the next post here.

    So what about the IBM Virtual Appliance Factory?
Where PureFlex and PureApplication (PureSystems) are the platforms or vehicles enabling your journey to efficient and effective information services delivery, and PureSystems Centre (or center for those of you in the US) is the portal or information center, the IBM Virtual Appliance Factory (VAF) is a collection of tools, technologies, processes and methodologies. The VAF helps developers or ISVs prepackage applications or solutions for deployment into Kernel-based Virtual Machine (KVM) on Intel and IBM PowerVM virtualized environments, which are also supported by PureFlex and PureApplication systems.

VAF technologies include the Distributed Management Task Force (DMTF) Open Virtualization Format (OVF) and Open Virtualization Alliance (OVA) related work, along with other tools for combining operating systems (OS), middleware and solution software into a delivery package, or virtual appliance, that can be deployed into cloud and virtualized environments. Benefits include reducing the complexity of working with logical partitions (LPARs) and VM configuration, plus abstraction and portability for deployment or movement from private to public environments. The net result should be less complexity, lowering costs while reducing mean time to install and deploy. Here is a link to learn more about VAF, its capabilities and how to get started.
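
As a side note on the packaging formats just mentioned, an OVF package is commonly shipped as a single .ova file, which is simply a tar archive holding the .ovf XML descriptor, a manifest and disk images. Here is a minimal Python sketch to peek inside one; the file name appliance.ova is a placeholder, not a real artifact from the announcement.

```python
# Minimal sketch: an OVF package is commonly shipped as a single .ova file,
# which is a tar archive containing the .ovf XML descriptor, a manifest and
# disk images. The file name "appliance.ova" is a placeholder.
import tarfile

with tarfile.open("appliance.ova") as ova:
    names = ova.getnames()
    for name in names:
        print(name)  # e.g. appliance.ovf, appliance.mf, disk1.vmdk
    descriptor = next(n for n in names if n.endswith(".ovf"))
    print("OVF descriptor:", descriptor)
```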

    So what does cloud ready mean?
IBM is touting cloud-ready capability in the context of rapid out-of-the-box deployment, ease of use and ease of acquisition. This is in line with what others are doing with converged server, storage, networking, hardware, software and hypervisor solutions. IBM is also touting that it is using the same publicly available products that it uses in its own public SmartCloud service offerings.

    So what is scale in vs. scale up, scale out or scale within?
Traditional thinking is that scaling refers to increasing capacity. Scaling also means increasing performance, availability and functionality with stability. Scaling with stability means that as performance, availability, capacity or other features are increased, problems or complexity are not introduced. For example, scaling performance with stability should not result in loss of availability or capacity, a capacity increase should not come at the cost of performance or availability, and management tools should work for you, instead of you working for them.

Scaling up and scaling out have been used to describe scaling performance, availability, capacity and other attributes beyond the limits of a single system, box or cabinet. For example, clustered, cloud, grid and other approaches refer to scaling out or horizontally across different physical resources. Scaling up or scaling vertically means scaling within a system, using faster, denser technologies to do more in the same footprint. HDS announced a while back what they refer to as 3D scaling, which embraces the above notions of scaling up, out and within across different dimensions. IBM is building on that by emphasizing scaling that leverages faster, denser components such as Power7 and Intel processors to scale within the box, system or node, which can also be scaled out using enhanced networking from IBM and their partners.

    So what about backup/restore, BC, DR and general data protection?
I would expect IBM to step up and talk about how they can leverage their data protection and associated management toolsets, technologies and products. IBM already has the components (hardware, software) for backup/restore, BC, DR, data protection and security, along with associated service offerings. One would expect IBM to come out not only with backup, restore, BC and DR solutions or versions, but also with ones for archiving or data preservation and compliance appliance variants, along with related themes. We know that IBM has the pieces, people, processes and practices; let us see if IBM has learned from competitors who may have missed data protection messaging opportunities. Sometimes what is assumed to be understood does not get discussed; however, often what is assumed and not understood should be discussed. Hence, let us see if IBM does more than say, oh yes, we have those capabilities and products too.

So what do these offer compared to others who are doing similar things?
Different vendors have taken various approaches to bringing converged products or solutions to the marketplace. Not surprisingly, storage-centric vendors EMC and NetApp have partnered with Cisco for servers (compute). Where Cisco was known for networking and has more recently moved into compute servers, EMC and NetApp are known for storage and are moving into the converged space with servers. Since EMC and NetApp often compete with storage offerings from traditional server vendors Dell, HP, IBM and Oracle among others, and Cisco now also competes with those same server vendors it previously partnered with for networking, it makes sense for Cisco, EMC and NetApp to partner.

    While EMC owns a large share of VMware, they do also support Microsoft and other partners including Citrix. NetApp followed EMC into the converged space partnering with Cisco for compute and networking adding their own storage along with supporting hypervisors from Citrix, Microsoft and VMware along with third-party ISVs including Microsoft and SAP among others. Dell has evolved from reference architectures to products called vStart that leverage their own technologies along with those of partners.

A challenge for Dell, however, is that vStart sounds more like a service offering than a product that they or their VARs and business partners can sell and add value around. HP is also in the converged game, as is Oracle among others. With PureSystems, IBM is building on what its competitors and in some cases partners are doing by adding and messaging more around the many ISVs and applications that are part of the PureSystems initiative. Rest assured, there is more to PureSystems than simply some new marketing, press releases, videos and talk about partners and ISVs. The following table provides a basic high-level comparison of what different vendors are doing or working towards; it is not intended to be a comprehensive review.

| Who | What | Server | Storage | Network | Software | Other comments |
|-----|------|--------|---------|---------|----------|----------------|
| Cisco | UCS | Cisco | Partner | Cisco | Cisco and partners | Various hypervisors and OS |
| Dell | vStart | Dell | Dell | Dell and partners | Dell and partners | Various hypervisors, OS and bundles |
| EMC (VCE) | Vblock, VSPEX | Cisco | EMC | Cisco and partners | EMC, Cisco and partners | Various hypervisors, OS and bundles; VSPEX adds more partner solution bundles |
| HP | Converged | HP | HP | HP and partners | HP and partners | Various hypervisors, OS and bundles |
| IBM | PureFlex | IBM | IBM | IBM and partners | IBM and partners | Various hypervisors, OS and bundles, adding more ISV partners |
| NetApp | FlexPod | Cisco | NetApp | Cisco and partners | NetApp, Cisco and partners | Various hypervisors, OS and bundles for SAP, Microsoft among others |
| Oracle | ExaLogic (Exadata database) | Oracle | Oracle | Partners | Oracle and partners | Various Oracle software tools and technologies |

    So what took IBM so long compared to others?
Good question; what is the saying? Rome was not built in a day!

    Click here to view the next post in this series, ok, nuff said for now.

    Here are some links to learn more:
    Various IBM Redbooks and related content
    The blame game: Does cloud storage result in data loss?
    What do you need when its time to buy a new server?
    2012 industry trends perspectives and commentary (predictions)
    Convergence: People, Processes, Policies and Products
    Buzzword Bingo and Acronym Update V2.011
    The function of XaaS(X) Pick a letter
    Hard product vs. soft product
    Part I: PureSystems, something old, something new, something from big blue
    Part II: PureSystems, something old, something new, something from big blue
    Part III: PureSystems, something old, something new, something from big blue
    Part IV: PureSystems, something old, something new, something from big blue
    Part V: PureSystems, something old, something new, something from big blue
    Cloud and Virtual Data Storage Networking

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Part II: PureSystems, something old, something new, something from big blue

    This is the second in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here, and the next post here.

    So what are the speeds and feeds of a PureFlex system?
    The components that make up the PureFlex line include:

• IBM management node (server with management software tools).
• 10Gb Ethernet (LAN) switch, adapters and associated cabling.
• IBM V7000 virtual storage (also see here and here).
• Dual 8GFC (8Gb Fibre Channel) SAN switches and adapters.
• Servers with either x86 xSeries using, for example, Intel Sandy Bridge EP 2.6 GHz 8-core processors, or IBM's Power7-based pSeries for AIX. Note that IBM blade center systems (now rebadged as part of PureSystems) support various I/O and networking interfaces including SAS, Ethernet, Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and InfiniBand, using adapters and switches from various partners.
• Virtual machine (VM) hypervisors such as Microsoft Hyper-V and VMware vSphere/ESX among others. In addition to x86-based hypervisors and kernel-based virtual machines (KVM), IBM also supports its own virtualization technology found in Power7-based systems. Check the IBM support matrix for specific configurations and current offerings.
• Optional middleware such as IBM WebSphere.

    Read more speeds and feeds at the various IBM sites including on Tony Pearson’s blog site.

    So what is IBM PureApplication System?
This builds on PureFlex systems as a foundation for deploying various software stacks to deliver traditional IT applications or cloud Platform as a Service (PaaS), Software as a Service (SaaS) and Application as a Service (AaaS) models. Examples include cloud or web stacks, Java, database, analytics or other applications, with buzzwords of elastic, scalable, repeatable, self-service, rapid provisioning, resilient, multi-tenant and secure among others. Note that if you are playing buzzword bingo, go ahead and say Bingo when you are ready, as IBM has a winner in this category.

    So what is the difference between PureFlex and PureApplication systems?
PureApplication systems leverage PureFlex technologies, adding extra tools and functionality for cloud-like application delivery.

    So what is IBM PureSystems Centre?
It is a portal or central place where IBM and business partner solutions pertaining to PureApplication and PureFlex systems can be accessed, including information for initial installation support along with maintenance and upgrades. At launch, IBM is touting more than 150 solutions or applications that are available or qualified for deployment on PureApplication and PureFlex systems. In addition, IBM Patterns (aka templates) can also be accessed via this venue. Examples of application or independent software vendor (ISV) developed solutions for banking, education, financial, government, healthcare and insurance can be found at the PureSystems Centre portal (here, here and here).

    So what part of this is a service and what is a product?
Other than the PureSystems Centre, which is a web portal for accessing information and technologies, PureFlex and PureApplication along with the Virtual Appliance Factory are products or solutions that can be bought from IBM or their business partners. In addition, IBM business partners or third parties can also use these solutions, housed in their own, a customer, or a third-party facility, for delivering managed service provider (MSP) capabilities, along with other PaaS, SaaS or AaaS type functionality. In other words, these solutions can be bought or leased by IT and other organizations for their own use in a traditional IT deployment model, or in a private, hybrid or public cloud model.

Another option is for service providers to acquire these solutions for use in developing and delivering their own public, private or hybrid services. IBM is providing the hard product (hardware and software) that enables your return on innovation (the new ROI) to create and deliver your own soft product (services and experiences) consumed by those who use those capabilities. In addition to traditional financial, quantitative return on investment (ROI) and total cost of ownership (TCO), the new ROI complements those by adding a qualitative aspect. Your return on innovation will depend on what you are capable of doing that enables your customers or clients to be productive or creative. For example, enabling your customers or clients to boost productivity, removing complexity and cost while maintaining or enhancing Quality of Service (QoS), service level objectives (SLOs) and service level agreements (SLAs), in addition to supporting growth, using a given set of hard products. Thus, your soft product is a function of your return on innovation, and vice versa.

Note that in this context, not to be confused with hardware and software, hard products are those technologies including hardware, software and services that are obtained and deployed to deliver a soft product. A soft product in this context does not refer to software; rather, it is the combination of hard products plus your own developed or separately obtained software and tools, along with best practices and usage models. Thus, two organizations can use the same hard products and deliver separate soft products with different attributes and characteristics, including cost, flexibility and customer experience.

    So what is a Pattern of Expertise?
A pattern of expertise combines operational know-how and knowledge about common infrastructure resource management (IRM), data center infrastructure management (DCIM) and other commonly repeated processes, practices and workflows, including provisioning. Common patterns of activity and expertise for routine or other time-consuming tasks, which some might refer to as templates or workflows, enable policy-driven automation. For example, IBM cites recurring time-consuming tasks that lend themselves to being automated, such as provisioning, configuration and upgrades, and associated IRM, DCIM, data protection, storage and application management activities. Automation software tools are included as part of PureSystems, with patterns being downloadable as packages for common tasks and applications found at the IBM PureSystems Centre. (A minimal sketch of the template idea follows the list below.)

    At announcement, there are three types or categories of patterns:

• IBM patterns: Factory-created and supplied with the systems, based on experience IBM has derived from various managers, engineers and technologists for automating common tasks, including configuration, deployment, and application upgrades and maintenance. The aim is to cut the amount of time and intervention needed to deploy applications and other common functions, enabling IT staff to be more productive and address other needs.
• ISV patterns: These leverage experience and knowledge from ISVs partnered with IBM, which at time of launch numbers over 125 vendors offering certified PureSystems Ready applications. The benefit and objective are to cut the time and complexity associated with procuring (e.g. purchasing), deploying and managing third-party ISV software. Downloadable pattern packages can be found at the IBM PureSystems Centre.
• Customer patterns: Enable customers to collect and package their own knowledge, processes, rules, policies and best practices into patterns for automation. In addition to collecting knowledge for acquisition, configuration, day-to-day management and troubleshooting, these patterns can facilitate automation of tasks to ease onboarding of new staff, employees or contractors. These patterns or templates also capture workflows for automation, enabling shorter deployment times for systems and applications in locations where skill sets do not exist.

    Here is a link to some additional information about patterns on the IBM developerWorks site.
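
To make the template idea above more tangible, here is a minimal, hypothetical Python sketch of a pattern expressed as declarative data plus a trivial runner. Nothing here is an actual PureSystems API; a real pattern engine would invoke provisioning and configuration services at each step.

```python
# Hypothetical sketch of the template/workflow idea: a "pattern" expressed as
# declarative data plus a trivial runner. Nothing here is an actual
# PureSystems API; a real engine would call provisioning services per step.
provision_db_pattern = {
    "name": "provision-database",
    "steps": [
        {"task": "allocate_vm", "cpus": 4, "memory_gb": 16},
        {"task": "attach_storage", "size_gb": 500},
        {"task": "install_middleware", "package": "database-server"},
        {"task": "apply_policy", "backup": "nightly", "retention_days": 30},
    ],
}


def run_pattern(pattern: dict) -> None:
    """Walk a pattern's steps in order, logging what a real engine would do."""
    print(f"running pattern: {pattern['name']}")
    for step in pattern["steps"]:
        params = {k: v for k, v in step.items() if k != "task"}
        print(f"  step {step['task']}: {params}")


run_pattern(provision_db_pattern)
```

Expressing the workflow as data rather than code is what allows such patterns to be packaged, downloaded and shared, as described above.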

    Click here to view the next post in this series, ok, nuff said for now.

    Here are some links to learn more:
    Various IBM Redbooks and related content
    The blame game: Does cloud storage result in data loss?
    What do you need when its time to buy a new server?
    2012 industry trends perspectives and commentary (predictions)
    Convergence: People, Processes, Policies and Products
    Buzzword Bingo and Acronym Update V2.011
    The function of XaaS(X) Pick a letter
    Hard product vs. soft product
    Part I: PureSystems, something old, something new, something from big blue
    Part II: PureSystems, something old, something new, something from big blue
    Part III: PureSystems, something old, something new, something from big blue
    Part IV: PureSystems, something old, something new, something from big blue
    Part V: PureSystems, something old, something new, something from big blue
    Cloud and Virtual Data Storage Networking

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Part I: PureSystems, something old, something new, something from big blue

    This is the first in a five-part series around the recent IBM PureSystems announcements. You can view the next post here.

For a certain generation of IBM faithful or followers, the recently announced PureFlex and PureApplication systems might give a sense of déjà vu, perhaps even causing some to wonder if they just woke up from a long Rip Van Winkle type nap.

    Yet for another generation who may not yet be future IBM followers, fans, partners or customers, there could be a sense of something new and revolutionary with the PureFlex and PureApplication systems (twitter @ibmpuresystems).

In between those two groups exist others who are either scratching their heads or reinvigorated with enthusiasm to get out and discuss opportunities around little data (traditional and transactional) and big data, servers, virtualization, converged infrastructure, dynamic data centers, private clouds, ITaaS, SaaS, AaaS, PaaS, IaaS and other related themes or buzzword bingo topics.

Let us dig a little deeper and look at some "so what" types of questions, along with industry trends perspectives, around what IBM has announced.

    So what did IBM announce?
    IBM announced PureSystems including:

    • PureFlex systems, products and technologies
    • PureApplication systems
    • PureSystems Centre

You can think of IBM PureSystems and Flex Systems products and technology as a:

• Private cloud or turnkey solution bundle
• Platform for deploying public or hybrid clouds
• Data center in a box or converged and dynamic system
• ITaaS or SaaS/AaaS or PaaS or IaaS or cloud in a box
• Rack 'em, stack 'em and package 'em type solution

    So what is an IBM PureFlex System and what is IBM using?
It is a factory-integrated data and compute infrastructure in a cabinet, combining cloud, virtualization, server, data and storage networking capabilities. The IBM PureFlex system comprises various IBM products and technologies (hardware, software and services) optimized with management across physical and virtual resources (servers, storage (V7000), networking, operating systems, hypervisors and tools).

PureFlex includes automation and optimization technologies, along with what IBM is referring to as patterns of expertise, which you might relate to as templates. There is support for various hypervisors and management integration, along with application and operating system support, leveraging IBM xSeries (x86 such as Intel) and pSeries (Power7) based processors for compute. Storage is the IBM V7000 (here and here), with networking and connectivity via IBM and their partners. The solution can support traditional, virtual and cloud deployment models, as well as serve as a platform for deploying Infrastructure as a Service (IaaS) on a public, managed service provider (MSP), hosting or private basis.

    Click here to view the next post in this series, ok nuff said for now.

    Here are some links to learn more:
    Various IBM Redbooks and related content
    The blame game: Does cloud storage result in data loss?
    What do you need when its time to buy a new server?
    2012 industry trends perspectives and commentary (predictions)
    Convergence: People, Processes, Policies and Products
    Buzzword Bingo and Acronym Update V2.011
    The function of XaaS(X) Pick a letter
    Hard product vs. soft product
    Part I: PureSystems, something old, something new, something from big blue
    Part II: PureSystems, something old, something new, something from big blue
    Part III: PureSystems, something old, something new, something from big blue
    Part IV: PureSystems, something old, something new, something from big blue
    Part V: PureSystems, something old, something new, something from big blue
    Cloud and Virtual Data Storage Networking

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Data Center I/O Bottlenecks Performance Issues and Impacts

This is an excerpt blog version of the popular Server and StorageIO Group white paper "IT Data Center and Data Storage Bottlenecks", originally published in August 2006, which is as much if not more relevant today than it was then.

Most Information Technology (IT) data centers have bottleneck areas that impact application performance and service delivery to IT customers and users. Possible bottleneck locations shown in Figure-1 include servers (application, web, file, email and database), networks, application software, and storage systems. For example, users of IT services can encounter delays and lost productivity due to seasonal workload surges or Internet and other network bottlenecks. Network congestion or dropped packets, resulting in wasteful and delayed retransmission of data, can be the result of network component failure, poor configuration or lack of available low latency bandwidth.

Server bottlenecks due to lack of CPU processing power, memory or undersized I/O interfaces can result in poor performance or, in worst-case scenarios, application instability. Application bottlenecks, including database systems suffering from excessive locking, poor query design, data contention and deadlock conditions, result in poor user response time. Storage and I/O performance bottlenecks can occur at the host server due to lack of I/O interconnect bandwidth, such as an overloaded PCI interconnect, storage device contention, and lack of available storage system I/O capacity.

These performance bottlenecks impact most applications and are not unique to large enterprise or scientific high performance computing (HPC) environments. The direct impacts of data center I/O performance issues include a general slowing of systems and applications, causing lost productivity for users of IT services. Indirect impacts include additional work by IT staff to troubleshoot, analyze, re-configure and react to application delays and service disruptions.


    Figure-1: Data center performance bottleneck locations

    Data center performance bottleneck impacts (see Figure-1) include:

• Underutilization of disk storage capacity to compensate for lack of I/O performance capability
• Poor Quality of Service (QoS) causing Service Level Agreement (SLA) objectives to be missed
• Premature infrastructure upgrades combined with increased management and operating costs
• Inability to meet peak and seasonal workload demands, resulting in lost business opportunity

    I/O bottleneck impacts
It should come as no surprise that businesses continue to consume and rely upon larger amounts of disk storage. Disk storage and I/O performance fuel the hungry needs of applications in order to meet SLA and QoS objectives. The Server and StorageIO Group sees that, even with efforts to reduce storage capacity or improve capacity utilization with information lifecycle management (ILM) and infrastructure resource management (IRM) enabled infrastructures, applications leveraging rich content will continue to consume more storage capacity and require additional I/O performance. Similarly, at least for the next few years, the current trend of making and keeping additional copies of data for regulatory compliance and business continuance is expected to continue. These demands all add up to a need for more I/O performance capability to keep up with server processor performance improvements.


    Figure-2: Processing and I/O performance gap

    Server and I/O performance gap
The continued need for accessing more storage capacity results in an alarming trend: the expanding gap between server processing power and the available I/O performance of disk storage (Figure-2). This server-to-I/O performance gap has existed for several decades and continues to widen instead of improving. The net impact is that bottlenecks associated with the server-to-I/O performance gap result in lost productivity for IT personnel and customers, who must wait for transactions, queries, and data access requests to be resolved.

    Application symptoms of I/O bottlenecks
    There are many applications across different industries that are sensitive to timely data access and impacted by common I/O performance bottlenecks. For example, as more users access a popular file, database table, or other stored data item, resource contention will increase. One way resource contention manifests itself is in the form of database “deadlock” which translates into slower response time and lost productivity. 

    Given the rise and popularity of internet search engines, search engine optimization (SEO) and on-line price shopping, some businesses have been forced to create expensive read-only copies of databases. These read-only copies are used to support more queries to address bottlenecks from impacting time sensitive transaction databases.

In addition to increased application workload, IT operational procedures to manage and protect data also contribute to performance bottlenecks. Data center operational procedures result in additional file I/O scans for virus checking, database purge and maintenance, data backup, classification, replication, data migration for maintenance and upgrades, as well as data archiving. The net result is that essential data center management procedures contribute to performance challenges, impacting business productivity.

    Poor response time and increased latency
    Generally speaking, as additional activity or application workload including transactions or file accesses are performed, I/O bottlenecks result in increased response time or latency (shown in Figure-3). With most performance metrics more is better; however, in the case of response time or latency, less is better.  Figure-3 shows the impact as more work is performed (dotted curve) and resulting I/O bottlenecks have a negative impact by increasing response time (solid curve) above acceptable levels. The specific acceptable response time threshold will vary by applications and SLA requirements. The acceptable threshold level based on performance plans, testing, SLAs and other factors including experience serves as a guide line between acceptable and poor application performance.

    As more workload is added to a system with existing I/O issues, response time will correspondingly increase, as seen in Figure-3. The more severe the bottleneck, the faster response time will deteriorate (e.g. increase) from acceptable levels. The elimination of bottlenecks enables more work to be performed while maintaining response time below acceptable service level threshold limits.


    Figure-3: I/O response time performance impact
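
    To make the shape of that curve concrete, here is a minimal Python sketch using the classic M/M/1 queuing approximation, where response time R = S / (1 - U) for service time S and utilization U. The service time, device capacity and threshold values are illustrative assumptions, not measurements from any particular system.

        # A minimal sketch (illustrative numbers only) of how response time climbs
        # non-linearly as workload approaches a device's capacity, using the
        # classic M/M/1 queuing approximation: R = S / (1 - U).

        service_time_ms = 1.0   # time to service one I/O with no queuing (assumed)
        max_iops = 1000.0       # device capacity = 1 / service time
        threshold_ms = 10.0     # assumed acceptable response time threshold

        for offered_iops in (100, 400, 700, 900, 950, 990):
            utilization = offered_iops / max_iops
            response_ms = service_time_ms / (1.0 - utilization)
            flag = "  <-- exceeds acceptable threshold" if response_ms > threshold_ms else ""
            print(f"{offered_iops:>4} IOPS  U={utilization:.2f}  R={response_ms:6.1f} ms{flag}")

    Note how response time roughly doubles between 900 and 950 IOPS, then increases five-fold by 990 IOPS: the last few percent of utilization are where the damage is done.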

    Seasonal and peak workload I/O bottlenecks
    Another common challenge and cause of I/O bottlenecks is seasonal and/or unplanned workload increases that result in application delays and frustrated customers. In Figure-4 a workload representing an eCommerce transaction based system is shown with seasonal spikes in activity (dotted curve). The resulting impact to response time (solid curve) is shown in relation to a threshold line of acceptable response time performance. For example, peaks due to holiday shopping exchanges appear in January and then drop off, activity increases again near Mother's Day in May, rises for back to school shopping in August, and climbs once more as holiday shopping starts in late November.


    Figure-4: I/O bottleneck impact from surge workload activity
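
    As a back of the envelope illustration of planning for such surges, the following Python sketch sizes I/O capacity from a set of hypothetical monthly workload multipliers; the baseline IOPS, multipliers and target utilization are all assumptions for illustration only.

        # A sketch of sizing for seasonal peaks: given assumed monthly workload
        # multipliers for an eCommerce system, estimate the I/O capacity needed
        # to keep the busiest month at or below a target utilization.

        baseline_iops = 5000              # assumed average transaction workload
        monthly_multiplier = {            # assumed seasonal pattern
            "Jan": 1.6, "Feb": 1.0, "Mar": 1.0, "Apr": 1.1, "May": 1.4,
            "Jun": 1.0, "Jul": 1.0, "Aug": 1.3, "Sep": 1.0, "Oct": 1.1,
            "Nov": 1.7, "Dec": 1.9,
        }
        target_utilization = 0.70         # headroom so response time stays acceptable

        peak_month, peak_mult = max(monthly_multiplier.items(), key=lambda kv: kv[1])
        peak_iops = baseline_iops * peak_mult
        required_capacity = peak_iops / target_utilization

        print(f"Peak month: {peak_month} at {peak_iops:,.0f} IOPS")
        print(f"Capacity needed at {target_utilization:.0%} utilization: {required_capacity:,.0f} IOPS")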

    Compensating for lack of performance
    Besides impacting user productivity due to poor performance, I/O bottlenecks can result in system instability or unplanned application downtime. One only needs to recall recent electric power grid outages that were due to instability and insufficient capacity resulting from increased peak user demand.

    Common approaches to address I/O bottlenecks have been to either do nothing (incur and deal with the service disruptions) or over configure by throwing more hardware and software at the problem. To compensate for a lack of I/O performance and counter the resulting negative impact on IT users, a common approach is to add more hardware to mask or move the problem.

    However, this often leads to extra storage capacity being added to make up for a shortfall in I/O performance. By over configuring to support peak workloads and prevent loss of business revenue, excess storage capacity must be managed throughout the non-peak periods, adding to data center and management costs. The resulting ripple effect is that now more storage needs to be managed, including allocating storage network ports, configuring, tuning, and backing up data. This can and does result in environments that have storage utilization well below 50% of their useful storage capacity. The solution is to address the problem rather than moving and hiding the bottleneck elsewhere (rather like sweeping dust under the rug).

    Business value of improved performance
    Putting a value on the performance of applications and their importance to your business is a necessary step in the process of deciding where and what to focus on for improvement. For example, what is the value of reducing application response time and the associated business benefit of allowing more transactions, reservations or sales to be made? Likewise, what is the value of improving the productivity of a designer or animator to meet tight deadlines and market schedules? What is the business benefit of enabling a customer to search faster for an item, place an order, access media rich content, or in general improve their productivity?
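
    As a simple illustration of that kind of valuation, here is a hypothetical Python sketch; the transaction rates, per-transaction value, user count and hours are made-up numbers to show the arithmetic, not benchmarks.

        # A hypothetical sketch of putting a dollar value on response time: if
        # faster I/O lets each user complete more transactions per hour, the
        # improvement has a measurable worth. Every figure below is an assumed
        # value for illustration, not a benchmark result.

        tph_before = 120               # transactions/hour at 2.0 s response time (assumed)
        tph_after = 150                # transactions/hour at 1.2 s response time (assumed)
        value_per_transaction = 0.50   # assumed contribution per transaction, $
        users = 100
        hours_per_year = 2000

        annual_gain = (tph_after - tph_before) * value_per_transaction * users * hours_per_year
        print(f"Estimated annual value of the improvement: ${annual_gain:,.0f}")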

    Server and I/O performance gap as a data center bottleneck
    I/O performance bottlenecks are a widespread issue across most data centers, affecting many applications and industries. Applications impacted by data center I/O bottlenecks to be looked at in more depth are electronic design automation (EDA), entertainment and media, database online transaction processing (OLTP) and business intelligence. These application categories represent transactional processing, shared file access for collaborative work, and processing of shared, time sensitive data.

    Electronic design
    Computer aided design (CAD), computer assisted engineering (CAE), electronic design automation (EDA) and other design tools are used for a wide variety of engineering and design functions. These design tools require fast access to shared, secured and protected data. The objective of using EDA and other tools is to enable faster product development with better quality and improved worker productivity. Electronic components manufactured for the commercial, consumer and specialized markets rely on design tools to speed the time-to-market of new products as well as to improve engineer productivity.

    EDA tools, including those from Cadence, Synopsys, Mentor Graphics and others, are used to develop expensive and time sensitive electronic chips, along with circuit boards and other components, to meet market windows and supplier deadlines. An example of this is a chip vendor being able to simulate, develop, test, produce and deliver a new chip in time for manufacturers to release their new products based on those chips. Another example is aerospace and automotive engineering firms leveraging design tools, including CATIA and UGS, on a global basis, relying on their supplier networks to do the same in a real-time, collaborative manner to improve productivity and time-to-market. This results in contention for shared file and data access and, as a work-around, more copies of data being kept as local buffers.

    I/O performance impacts and challenges for EDA, CAE and CAD systems include:

    • Delays in drawing and file access resulting in lost productivity and project delays
    • Complex configurations to support computer farms (server grids) for I/O and storage performance
    • Proliferation of dedicated storage on individual servers and workstations to improve performance

    Entertainment and media
    While some applications are characterized by high bandwidth or throughput, such as streaming video and digital intermediate (DI) processing of 2K (2048 pixels per line) and 4K (4096 pixels per line) video and film, many other applications are also impacted by I/O performance time delays. Even bandwidth intensive applications for video production are time sensitive and vulnerable to I/O bottleneck delays. For example, cell phone ring tones, instant messaging, small MP3 audio files, and voice- and e-mail are impacted by congestion and resource contention.

    Prepress production and publishing, which require the assimilation of many small documents, files and images undergoing revision, can also suffer. News and information websites must look up breaking stories, while entertainment sites serve views and downloads of popular music, still images and other rich content; all of this can be negatively impacted by even small bottlenecks. Even with streaming video and audio, access to those objects requires accessing some form of high speed index to locate where the data files are stored for retrieval. These indexes or databases can become bottlenecks preventing high performance storage and I/O systems from being fully leveraged.

    Index files and databases must be searched to determine the location where images and objects, including streaming media, are stored. Consequently, these indices can become points of contention resulting in bottlenecks that delay processing of streaming media objects. When a cell phone picture is taken and sent to someone, chances are that the resulting image will be stored on network attached storage (NAS) as a file, with a corresponding index entry in a database at some service provider location. Think about what happens to those servers and storage systems when several people all send photos at the same time.
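
    The following minimal Python sketch shows why such an index becomes the serialization point: every retrieval must resolve its location through the one shared index before the back-end storage is touched. The object names and paths are purely illustrative.

        # A minimal sketch of why the index becomes the hot spot: every media
        # retrieval first resolves its location through one shared index, so
        # index latency is paid on every request no matter how fast or parallel
        # the back-end storage is. Names and paths are illustrative.

        index = {"photo_123.jpg": "nas01:/vol7/objects/ab/photo_123.jpg"}

        def read_from_storage(location: str) -> str:
            # Stand-in for the actual NAS or object store read.
            return f"<contents of {location}>"

        def fetch(object_name: str) -> str:
            location = index[object_name]       # every request serializes on this lookup
            return read_from_storage(location)  # back-end reads can scale out

        print(fetch("photo_123.jpg"))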

    I/O performance impacts and challenges for entertainment and media systems include:

    • Delays in image and file access resulting in lost productivity
    • Redundant files and storage on local servers to improve performance
    • Contention for resources causing further bottlenecks during peak workload surges

    OLTP and business intelligence
    Surges in peak workloads result in performance bottlenecks on database and file servers, impacting time sensitive OLTP systems unless they are over configured for peak demand. For example, workload spikes due to holiday and back-to-school shopping, spring break and summer vacation travel reservations, Valentine's or Mother's Day gift shopping, and clearance and settlement on peak stock market trading days strain fragile systems. For database systems, maintaining performance for key objects, including transaction logs and journals, is important for eliminating performance issues as well as maintaining transaction and data integrity.

    An example tied to eCommerce is business intelligence systems (not to be confused with back office marketing and analytics systems for research). Online business intelligence systems are popular with online shopping and services vendors who track customer interests and previous purchases to tailor search results, views and make suggestions to influence shopping habits.

    Business intelligence systems need to be fast and support rapid lookup of history and other information to provide purchase histories and offer timely suggestions. The relative performance improvements of processors shift the application bottlenecks from the server to the storage access network. These applications have, in some cases, resulted in an exponential increase in query or read operations beyond the capabilities of single database and storage instances, resulting in database deadlock and performance problems or the proliferation of multiple data copies and dedicated storage on application servers.

    A more recent contribution to performance challenges, caused by the increased availability of on-line shopping and price shopping search tools, is the low cost craze (LCC) or price shopping. LCC has created a dramatic increase in the number of read or search queries taking place, further impacting database and file system performance. For example, an airline reservation system may create multiple read-only copies of its reservations database for searches, supporting price shopping while preventing impact to the time sensitive transactional reservation system. The result is that more copies of data must be maintained across more servers and storage systems, thus increasing costs and complexity. While expensive, the alternative of doing nothing results in lost business and market share.
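
    Here is a minimal Python sketch of that read-only copy workaround: price-shopping queries are spread round-robin across replicas while reservations and other writes go to the primary. The connection names and the simple SELECT-based routing test are illustrative assumptions.

        # A sketch of read/write splitting: read-only search traffic is spread
        # across read-only database copies, keeping the time sensitive primary
        # free for reservations and other writes. Names are illustrative.

        import itertools

        PRIMARY = "db-primary"                                    # handles writes
        REPLICAS = ["db-replica1", "db-replica2", "db-replica3"]  # handle searches
        _replicas = itertools.cycle(REPLICAS)

        def route(query: str) -> str:
            if query.lstrip().upper().startswith("SELECT"):
                return next(_replicas)   # read-only price-shopping traffic
            return PRIMARY               # bookings, payments, other writes

        print(route("SELECT fare FROM flights WHERE ..."))   # -> db-replica1
        print(route("UPDATE seats SET sold = 1 WHERE ..."))  # -> db-primary

    The trade-off described above shows up directly in this sketch: every replica added to absorb read traffic is another full copy of the database to provision, synchronize, back up and manage.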

    I/O performance impacts and challenges for OLTP and business intelligence systems include:

    • Application and database contention, including deadlock conditions, due to slow transactions
    • Disruption to application servers to install special monitoring, load balancing or I/O driver software
    • Increased management time required to support additional storage needed as an I/O workaround

    Summary/Conclusion
    It is vital to understand the value of performance, including response time or latency and the number of I/O operations, for each environment and particular application. While the cost per raw TByte may seem relatively inexpensive, the cost of I/O response time performance also needs to be effectively addressed and put into the proper context as part of the data center QoS cost structure.

    There are many approaches to address data center I/O performance bottlenecks, most centered on adding more hardware or addressing bandwidth or throughput issues. Time sensitive applications depend on low response time as workload, including throughput, increases; thus latency cannot be ignored. The key to removing data center I/O bottlenecks is to find and address the problem instead of simply moving or hiding it with more hardware and/or software. Simply adding fast devices such as SSDs may provide relief; however, if the SSDs are attached to high latency storage controllers, the full benefit may not be realized. Thus, identify and gain insight into data center I/O paths and bottlenecks, eliminating issues and problems to boost productivity and efficiency.
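
    To illustrate the SSD-behind-a-slow-controller point, here is a small Python sketch that adds up latency along an I/O path; the component latencies are rough, illustrative assumptions.

        # A sketch of why a fast device behind a slow controller may not deliver
        # its full benefit: latencies along the I/O path add up, so the slowest
        # remaining component dominates. All figures are illustrative assumptions.

        def path_latency_ms(components: dict) -> float:
            return sum(components.values())

        hdd_path = {"host/HBA": 0.1, "controller": 0.5, "15K disk": 5.0}
        ssd_path = {"host/HBA": 0.1, "controller": 0.5, "SSD": 0.1}

        print(f"HDD path: {path_latency_ms(hdd_path):.1f} ms")
        print(f"SSD path: {path_latency_ms(ssd_path):.1f} ms "
              "(the controller is now the dominant component)")

    In this sketch the device itself got 50 times faster, yet the end-to-end path improved only about eight-fold because the controller latency remained; that is the bottleneck that moved rather than disappeared.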

    Where to Learn More
    Additional information about IT data center, server, storage as well as I/O networking bottlenecks along with solutions can be found at the Server and StorageIO website in the tips, tools and white papers, as well as news, books, and activity on the events pages. If you are in the New York area on September 23, 2009, check out my presentation on The Other Green – Storage Optimization and Efficiency that will touch on the above and other related topics. Download your copy of "IT Data Center and Storage Bottlenecks" by clicking here.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Green Power and Cooling Tools and Calculators

    In the course of doing research and consulting work with various IT organizations, VARs, trade groups and vendors over the past couple of years, not to mention in preparing for my new book "The Green and Virtual Data Center" (Auerbach), I have come across (see a list here) several tools, calculators and modeling or sizing utilities pertaining to power, cooling, floor-space, EH&S (PCFE) also known as green topics.

    Many vendors and organizations, including APC, Dell, EMC, Emerson, IBM, HP and Sun among others, have various types of green and related calculators in support of PCFE, performance and related sizing activities. These and other tools differ in what information they provide as well as the level of detail and configuration information; however, the tools are also evolving. As an example, EMC blogger Mark Twomey, aka Storagezilla, has a new post discussing an updated version of their calculator that is now a web based tool for PCFE and green sizing.
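
    For a sense of the arithmetic these calculators perform, here is a minimal Python sketch converting power draw into cooling load and annual energy cost; the 3.412 BTU/hr per watt conversion is standard, while the power draw and electricity rate are illustrative assumptions.

        # A minimal sketch of the arithmetic behind PCFE calculators: convert a
        # device's power draw into cooling load and annual energy cost. The
        # 3.412 BTU/hr per watt conversion is standard; the draw and rate are
        # illustrative assumptions.

        watts = 750.0                 # assumed draw of a storage shelf
        cost_per_kwh = 0.10           # assumed electricity rate, $/kWh
        hours_per_year = 24 * 365

        btu_per_hour = watts * 3.412                    # heat that must be cooled
        annual_kwh = watts * hours_per_year / 1000.0
        annual_cost = annual_kwh * cost_per_kwh

        print(f"Cooling load: {btu_per_hour:,.0f} BTU/hr")
        print(f"Annual energy: {annual_kwh:,.0f} kWh "
              f"(about ${annual_cost:,.0f}/yr, before cooling overhead)")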

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Storage Optimization: Performance, Availability, Capacity, Effectiveness

    Storage I/O trends

    With the IT and storage industry shying away from green hype, green washing and other green noise, there is also a growing realization that the new green is about effectively boosting efficiency to improve productivity and profitability or to sustain business and IT growth during tough economic times.

    This past week, while doing some presentations (I'll post a link soon to the downloads) at the 2008 San Francisco installment of the Storage Decisions event focused on storage professionals, as well as a keynote talk at the value added reseller (VAR) channel professional focused storage strategies event, a common theme was boosting productivity, improving efficiency, stretching budgets and enabling existing personnel and resources to do more with the same or less.

    During these and other presentations, keynotes, sessions and seminars, both here in the U.S. as well as in Europe recently, a recurring theme has been boosting efficiency along with closing the green gap. That is the gap between, on one side, industry and marketing rhetoric around green hype, green noise and green washing, issues that either do not resonate with, or cannot be funded by, IT organizations, and, on the other side, where many IT organizations' issues actually exist: power, cooling, floor space or footprint, EH&S (environmental health and safety) and economics.

    The green gap (here, and here, and here) is that, due to green hype around carbon footprints and related themes, many IT organizations around the world have not realized that boosting energy efficiency for active and on-line applications, data and workloads (e.g. doing more I/O operations per second (IOPS), transactions, files or messages processed per watt of energy) to address power, cooling and floor space is in fact a form of addressing green issues, both economic and environmental.

    Likewise for inactive or idle data there is a bit more of a linkage, in that green can mean powering things off. However, there is also a disconnect in that many perceive that storage, for example, is only green if it can be powered off, which, while true for in-active or idle data and applications, is not true for all data and application types.

    As mentioned already, for active workloads, green means doing more with the same or less power, cooling and floor space impact; that is, doing more work per unit of energy. In that theme, for active workloads, a slow, large capacity disk may in fact not be energy efficient if it impedes productivity and results in more energy being used to get the same amount of work done. For example, larger capacity SATA disk drives are often positioned as being the most green or energy efficient, which can be true for idle, in-active or non performance (time) sensitive applications where more data is stored in a denser footprint.

    However, for active workloads, lower capacity 15K RPM 300GB and 400GB Fibre Channel (FC) and SAS disk drives that deliver more IOPS or bandwidth per watt of energy can get more work done in the same amount of time.

    There is also a perception that FC and SAS disk drives use more power than SATA disk drives, which in some cases can be true; however, current generations of high performance 10K RPM and 15K RPM drives have very similar power draw on a raw spindle or device basis. What differs is the amount of capacity per watt for idle or inactive applications, or the number of IOPS or amount of performance for active configurations.
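
    A small Python sketch makes the two metrics concrete: capacity per watt favors big SATA drives for idle data, while IOPS per watt favors fast FC and SAS drives for active work. The drive figures below are rough, illustrative assumptions, not vendor specifications.

        # A rough sketch comparing drives on capacity per watt (favors large
        # SATA for idle data) versus IOPS per watt (favors fast FC/SAS drives
        # for active work). Figures are illustrative assumptions only.

        drives = {
            # name: (capacity_GB, approx_IOPS, active_watts)
            "1TB 7.2K SATA": (1000, 80, 12.0),
            "300GB 15K FC":  (300, 180, 15.0),
            "400GB 15K SAS": (400, 180, 14.0),
        }

        for name, (gb, iops, watts) in drives.items():
            print(f"{name:>14}: {gb / watts:6.1f} GB/watt   {iops / watts:5.1f} IOPS/watt")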

    On the other hand, solutions not normally perceived as being green compared to tape or IPM and MAID (1st generation and MAID 2.0), including SSD (Flash and RAM) along with fast SAS and FC disks or tiered storage systems that can do more IOPS or bandwidth per watt of energy, are in fact green and energy efficient for getting work done. Thus, there are two sides to optimizing storage for energy efficiency: optimizing for when doing work, e.g. more miles per gallon per amount of work done, and minimizing how much energy is used when not doing work.

    Thus, a new form of being green to sustain business growth while boosting productivity is Gaining Realistic Economic Efficiency Now, which as a by-product helps both business bottom lines as well as the environment by doing more with less. These are themes that are addressed in my new book "The Green and Virtual Data Center" (Auerbach), which will be formally launched and released for general availability just after the 1st of the year (hopefully sooner); however, you can beat the rush and order your copy now at Amazon and other fine venues around the world.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    From ILM to IIM, Is this a solution sell looking for a problem?

    Storage I/O trends

    Enterprise Storage Forum has a new piece about what could be the successor to ILM from a marketing rallying cry perspective in the form of Intelligent Information Management (IIM).

    Information management is an important topic; however, given tough economic times, can IIM be joined into other discussions about efficiency and boosting productivity to help justify its cost, whatever that cost may be in terms of more hardware, software and people to carry it out? With EMC and Gartner banging the drum, it will be interesting to see who else jumps on the IIM bandwagon.

    On the other hand, let's see what other variations surface, perhaps a VIIM (Virtualized IIM), an IIMaaS (IIM as a Service), or how about Cloud IIM or GIIM (Green IIM), among others like xIIM, where you plug whatever letter you want in front of IIM (something that someone missed out on a few years ago by not grabbing xLM).

    While I see the importance of data management, the bottom line is going to be how to budget and build a business case when sustaining business growth in tough economic times is a common theme. Hopefully we will see some business cases and justifications that can be self-funding, that is, where the cost of adopting and deploying IIM is covered by the savings in associated hardware and software management and maintenance fees, as well as by boosting overall IT and data management productivity.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved