Sometimes simplicity and flexibility, without complexity (and cost), are the enablers of innovation and productivity.
In this episode from NAB 2013 in Las Vegas (more on that in a future post), I meet up with the Padcaster (@ThePadcaster) creator Josh Apter (@PJmakemovies).
The Padcaster (the name of both the company and the product) is a mounting bracket for iPads that enables you to safely attach lights, lenses, microphones, tripods and other accessories to create a portable production studio.
Enjoy this episode from NAB 2013 with Josh Apter and the Padcaster, and check out their website www.thepadcaster.com. See if they will give you the NAB show special price; tell them Greg from StorageIO sent you.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Riding the current software defined data center (SDC) wave led by the likes of VMware, and the software defined networking (SDN) wave also championed by VMware via its acquisition of Nicira last year, Software Defined Marketing (SDM) is in full force. HP, a player in providing the core building blocks for traditional little data and big data, along with physical, virtual, converged, cloud and software defined environments, has announced a new compute, processor or server platform called the Moonshot 1500.
Software defined marketing aside, there are some real and interesting things from a technology standpoint that HP is doing with the Moonshot 1500 along with other vendors who are offering micro server based solutions.
First, for those who see server (processor and compute) improvements as being more and faster cores (and threads) per socket, along with extra memory, not to mention 10GbE or 40GbE networking and PCIe expansion or IO connectivity, hang on to your hats.
Moonshot follows the model of the micro servers or micro blades that HP has offered in the past, along with the likes of Dell and SeaMicro (now part of AMD). Micro servers are almost the opposite of regular servers or blades, where the focus is on packing more capability onto a motherboard or blade.
With micro servers, the approach is to support those applications and environments that do not need lots of CPU processing capability or large amounts of memory, storage or IO. These include some web hosting or cloud application environments that can leverage many smaller, lower power, lower performance, less resource intensive platforms. For example, big data (or little data) applications whose software or tools benefit from many low-cost, low power, lower performance nodes in distributed, clustered, grid, RAIN or ring based architectures can benefit from this type of solution.
What is the Moonshot 1500 system?
4.3U high rack mount chassis that holds up to 45 micro servers
Each hot-swap micro server is its own self-contained module similar to blade server
Server modules install vertically from the top into the chassis similar to some high-density storage enclosures
Compute or processors are Intel Atom S1260 2.0GHz based processors with 1 MB of cache memory
Single SO-DIMM slot (unbuffered ECC at 1333 MHz) supports 8GB (1 x 8GB DIMM) of DRAM
Each server module has a single onboard 2.5″ SATA drive (200GB SSD, 500GB HDD or 1TB HDD)
A dual-port Broadcom 5720 1 GbE LAN adapter per server module that connects to the chassis switches
Marvell 9125 storage controller integrated onboard each server module
Chassis and enclosure management along with ACPI 2.0b, SMBIOS 2.6.1 and PXE support
A pair of Ethernet switches, each providing up to six 10GbE uplinks for the Moonshot chassis
Dual RJ-45 connectors for iLO chassis management are also included
Status LEDs on the front of each chassis provide status of the servers and network switches
Support for Canonical Ubuntu 12.04, RHEL 6.4, and SUSE Linux Enterprise Server (SLES) 11 SP2
Notice a common theme with Moonshot along with other micro server-based systems and architectures?
If not, it is simple; literally, simple and flexible is the value proposition.
Simple is the theme (with software defined for marketing), along with low cost, lower energy demand, lower performance, and less of what is not needed, in order to remove cost.
Granted not all applications will be a good fit for micro servers (excuse me, software defined servers) as some will need the more robust resources of traditional servers. With solutions such as HP Moonshot, system architects and designers have more options available to them as to what resources or solution options to use. For example, a cloud or object storage system that does not need a lot of processing performance or memory per node, and only a modest amount of storage per node, might find this an interesting option for mid to entry-level needs.
Will HP release a version of their Lefthand or IBRIX (both since renamed) based storage management software on these systems for some market or application needs?
How about deploying NoSQL type tools such as Cassandra or MongoDB, or CloudStack, OpenStack Swift, Basho Riak (or Riak CS) and other software including object storage, on these types of solutions? Or web servers and other applications that do not need the fastest processors or the most memory per node?
Thus micro server-based solutions such as Moonshot enable return on innovation (the new ROI) by enabling customers to leverage the right tool (e.g. hard product) to create their soft product allowing their users or customers to in turn innovate in a cost-effective way.
Will the Moonshot servers be the software defined turnaround for HP? Click here to see what Bloomberg has to say, or Forbes here.
Learn more about Moonshot servers at HP here, here or data sheets found here.
Btw, HP claims that this is the industry's first software defined server, hmm.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
In this episode from SNW Spring 2013 in Orlando Florida, while Greg is in the process of boarding a flight home, Bruce Ravid (@BruceRave) catches up and talks with longtime storage industry insider (and outsider) Marc Farley. Marc flew into SNW for a few days (or hours) to catch up with customers, partners, peers and others. For those who may not know, Marc is now with Microsoft (they bought StorSimple last fall, check out this conversation over at Speaking in Tech where Marc and I were guests) and before that HP (they bought 3PAR) and before that Dell (they bought EqualLogic) among others. Bruce and Marc talk about basketball, storage, industry trends among other things.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
In this episode from SNW Spring 2013 in Orlando Florida, while Greg is in the process of boarding a flight home, Bruce Ravid (@BruceRave) catches up and talks with long time storage industry insider Tony DiCenzo of SNIA and Oracle. Their conversation covers industry trends, observations of SNW past and present along with other related topics.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Wayne gives us an update on what’s new with SNIA including education, tutorials, videos and other training material, along with standards such as SMI-S among other items. Also check out the companion podcast where Wayne is joined by SW Worth of SNIA education to discuss their new SNIA SPDEcon conference that will occur June 10th in Santa Clara California. Listen to the SPDEcon overview podcast discussion here.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
In this episode from SNW Spring 2013 in Orlando Florida, Bruce Ravid (@BruceRave) and I visit with Justin Stottlemyer (@JHStott) who is a Fellow and Storage Architect at Shutterfly.
Our conversation centers on how Justin and Shutterfly maximize their return on innovation (the new ROI) by using object storage along with other technology and techniques to create a resilient, scalable and flexible data infrastructure.
Justin was at SNW presenting on overcoming object integration at Shutterfly where their data infrastructure consists of 80PB of storage to house over 30PB of user content data that continues to grow.
For those not familiar, Shutterfly provides customers with free unlimited storage of their photos, which can then be printed in coffee table style books such as the one shown in the above figure. My wife has used Shutterfly a few times to create photo books like it.
As you will hear Justin explain in the podcast, photos get uploaded and ingested into their environment and are then made available for printing.
In addition to talking about object storage, private clouds, business continuance (BC) and disaster recovery (DR), other topics include performance and capacity planning, and maximizing return on innovation in addition to return on investment, among other items.
Listen in to hear how Justin and Shutterfly are currently managing 80PB of storage with over 30PB of user data that continues to grow.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
In 2012 AWS released their Storage Gateway, which you can use and try for free here, deployed either as an EC2 Amazon Machine Image (AMI) or locally on a hypervisor such as VMware vSphere/ESXi. About a year ago I did a storage gateway post (first, second and third impressions) when it was first released. I will do a new post soon following up with my later impressions and experiences of having used it recently. For now, my quick fourth impressions can be found here in this AWS Marketplace review. In general, the gateway is an AWS alternative to using third-party gateways, appliances or software tools for accessing AWS storage.
When deployed locally on a VM, the storage gateway communicates using the AWS APIs back to the S3 and EBS storage services (depending on how it is configured). Locally, the storage gateway presents an iSCSI block access method for Windows or other servers to use.
There are two modes: Gateway-Stored and Gateway-Cached. Gateway-Stored keeps your primary data on local storage mapped to the storage gateway, with asynchronous (time delayed, user defined) snapshots sent to S3 via EBS volumes. This is a handy way to have local storage for low latency access, yet use AWS for HA, BC and DR, along with a means for doing migration into or out of AWS. Gateway-Cached mode places primary storage in AWS S3 with a local cached copy to reduce network overhead.
When I tried the gateway a month or so ago, using both modes, I was not able to view any of my data using standard S3 tools. For example, when I looked in my S3 buckets the objects did not appear, something that AWS said had to do with where and how those buckets and objects are managed. On the other hand, I was able to see the EBS snapshots for Gateway-Stored mode, including using them as a means of moving data between local systems and AWS EC2 instances. Note that regardless of the AWS storage gateway mode, some local cache storage is needed, and likewise some EBS volumes will be needed depending on what mode is used.
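To illustrate that snapshot-based data movement, here is a minimal sketch using the current Python SDK (boto3) that creates an EBS volume from a gateway-produced snapshot and attaches it to an EC2 instance. The snapshot ID, instance ID, availability zone and device name are hypothetical placeholders, not values from my environment.

```python
import boto3

# Assumptions: the snapshot was produced by the gateway in Gateway-Stored
# mode; IDs and the availability zone below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a new EBS volume from the gateway-created snapshot.
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",   # hypothetical gateway snapshot
    AvailabilityZone="us-east-1a",          # must match the target instance
)

# Wait until the volume is ready, then attach it to an EC2 instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",       # hypothetical instance
    Device="/dev/sdf",
)
```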
When I used the gateway, a Windows Server mounted the iSCSI volume presented by the storage gateway and in turn served it to other systems as a shared folder. Thus while having block access such as iSCSI is nice, a NAS (NFS or CIFS) presentation and access mode would also be useful. However, more on the storage gateway in a future post. Also note that beyond the free trial period for using the gateway (you may still have to pay for storage being used), there are also fees for S3 and EBS storage volume use.
What about Glacier?
Shortly after its release last year, I did this piece about Glacier and have since been doing some testing proof of concepts with it.
I like Glacier and its prospects for doing various things, particularly for inactive data including deep archives that will seldom if ever be accessed, yet need to be retained. The business value proposition of Glacier is very high durability at low cost, assuming that you do not need to frequently access your data, and that when you do, you can wait 3 to 5 hours before retrieving it from your S3 buckets.
Access to Glacier is via API or AWS console so getting things into and out of it can be a challenge. For example I wanted to see if I could use the AWS storage gateway to more easily bulk move things into Glacier via S3, however no luck, at least for today. Speaking of S3, by setting your policies you determine when objects get moved into Glacier as well as how long they will stay there; you can read more about Glacier here and via AWS here.
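As an illustration of that policy-driven movement, here is a minimal sketch using the current Python SDK (boto3) that sets a lifecycle rule so objects under a prefix transition to the Glacier storage class after 90 days and expire after roughly seven years. The bucket name, prefix and day counts are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Assumptions: bucket name, prefix and retention periods are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                # Move objects to the Glacier storage class after 90 days.
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                # Delete them after roughly seven years of retention.
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```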
There is a myth that cloud vendors have hidden fees, which may be the case for some; however, so far I have not seen that to be the case with AWS. As a consumer, designer or architect, by doing your homework and looking at the above links among others you can be ready and understand the various fees and options. Hence, like procuring traditional hardware, software or services, do your due diligence and be an informed shopper.
Some more service cost notes include:
Note that with S3 Standard and RRS objects there is not a charge for deletion of objects, however there is a pro-rated charge per GByte of Glacier objects removed prior to 90 days. Glacier also allows up to 5% of your average monthly storage usage (pro-rated daily) to be restored with no charge, other fees apply for restoring larger amounts in a given period. Thus if you are planning on accessing and using data, analyze what your activity and usage will be as part of calculating your costs with Glacier. Read more about Glacier here.
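To make that pro-rated early-deletion charge concrete, here is a small sketch of the arithmetic. The per-GByte monthly rate used below is a made-up placeholder, not an actual AWS price, so substitute the current published Glacier rate when doing your own planning.

```python
# Sketch of the pro-rated charge for Glacier objects removed before the
# 90-day minimum. The rate is a hypothetical placeholder, not an AWS price.
MONTHLY_RATE_PER_GB = 0.01   # assumed example rate, USD per GByte-month
MINIMUM_DAYS = 90

def early_deletion_fee(size_gb: float, days_stored: float) -> float:
    """Charge for the unused portion of the 90-day minimum retention."""
    remaining_days = max(0.0, MINIMUM_DAYS - days_stored)
    return size_gb * MONTHLY_RATE_PER_GB * (remaining_days / 30.0)

# Example: 500 GBytes deleted after 30 days is billed for the remaining 60 days.
print(early_deletion_fee(500, 30))   # 500 * 0.01 * 2.0 = 10.0
```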
Standard EBS volumes are charged by the amount of storage space capacity you provision in GB, until released. For EBS snapshot copies there are fees for transferring data across regions; once moved, the rates of the new region apply to the snapshot.
As with Standard volumes, volume storage for Provisioned IOPS volumes is charged by the amount you provision in GB per month. With Provisioned IOPS volumes, you are also charged by the amount you provision in IOPS pro-rated as a percentage of days you have it in use for the month.
Thus it is important for cloud storage planning to know not only your space requirements, but also IOPS, bandwidth, and level of availability as well as durability. For Standard volumes, you will likely see a lower number of I/O requests on your bill than is seen by your application, unless you sync all of your I/Os to disk. So pay attention to what your needs are in terms of availability (accessibility), durability (resiliency or survivability), space capacity, and performance.
Leverage AWS CloudWatch tools and APIs to monitor the metrics that matter, for timely insight and situational awareness into how EBS, EC2, S3, Glacier, Storage Gateway and other services are being used (or what they are costing you). Also visit the AWS service health status dashboard to gain insight into how things are running to help gain confidence with cloud services and solutions.
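As a simple illustration of pulling one such metric, here is a minimal boto3 sketch that fetches the read-operation count for a single EBS volume over the last 24 hours. The volume ID is a hypothetical placeholder.

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Assumption: the volume ID below is a placeholder for one of your volumes.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=start,
    EndTime=end,
    Period=3600,              # one data point per hour
    Statistics=["Sum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```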
Hopefully this helps to fill in some gaps, giving more information to address questions along with generating new ones to prepare for your journey with clouds. After all, don’t be scared of clouds. Be prepared, do your homework, identify your concerns and then address those to gain cloud confidence.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
For those not familiar, Simple Storage Service (S3), Glacier and Elastic Block Storage (EBS) are part of the AWS cloud storage portfolio of services. There are several other storage and data related services for little data databases (SQL and NoSQL based); other offerings include compute, data management, application and networking services for different needs, shown in the following image.
S3 is well suited for both big and little data repositories of objects ranging from backup to archive to active video images and much more. In fact, if you are using some of the different AaaS or SaaS services including backup or file and video sharing, those may be using S3 as their back-end storage repository. For example, Netflix leverages various AWS capabilities as part of its data and application infrastructure (read more here).
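As a basic illustration of using S3 as such a repository, here is a minimal sketch using the current Python SDK (boto3) that stores and then retrieves an object. The bucket name, object key and file names are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Assumptions: bucket name, object key and local file names are placeholders.
bucket = "example-media-repository"
key = "backups/photo-album-2013.zip"

# Store an object (the "put") into the bucket.
with open("photo-album-2013.zip", "rb") as f:
    s3.put_object(Bucket=bucket, Key=key, Body=f)

# Retrieve it later (the "get") and write it back to local disk.
response = s3.get_object(Bucket=bucket, Key=key)
with open("restored-photo-album.zip", "wb") as f:
    f.write(response["Body"].read())
```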
AWS basics
AWS consists of multiple regions, each containing multiple availability zones from which data and applications are served.
Note that objects stored in a region never leave that region unless you move them; for example, data stored in EU West never leaves Ireland, and data in US East never leaves Virginia.
AWS does support the ability for user controlled movement of data between regions for business continuance (BC), high availability (HA) and disaster recovery (DR). Read more here at the AWS Security and Compliance site and in this AWS white paper.
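As one illustration of that user-controlled movement, here is a minimal boto3 sketch that copies an object from a bucket in US East to a bucket in EU West for BC or DR purposes. Both bucket names and the object key are hypothetical placeholders.

```python
import boto3

# Assumptions: bucket names and the object key are placeholders; the
# destination bucket already exists in the EU West (Ireland) region.
source_bucket = "example-us-east-bucket"
dest_bucket = "example-eu-west-bucket"
key = "reports/quarterly-archive.tar"

# Use a client in the destination region and pull the object across regions.
s3_eu = boto3.client("s3", region_name="eu-west-1")
s3_eu.copy_object(
    Bucket=dest_bucket,
    Key=key,
    CopySource={"Bucket": source_bucket, "Key": key},
)
```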
What about EBS?
That brings us to Elastic Block Storage (EBS), which is used by EC2 (read more about EC2 and instances here) as storage for cloud and virtual machines or compute instances. While S3 serves as a persistent backing store or target for holding snapshots, EBS can be thought of as primary storage. You can provision and allocate EBS volumes in the different data centers of the various AWS availability zones. As part of allocating your EBS volume you indicate the type, standard or provisioned IOPS; there is also the new EBS Optimized option, which enables instances that support the feature to have better IO performance to storage.
The following image shows an EC2 instance with EBS volumes (standard and provisioned IOPS) along with S3 volumes and snapshots. In the following example the instance and volumes are being served via the AWS US East region (Northern Virginia) using availability zone US East 1a. In addition, EBS optimized volumes are shown being used in the example to increase bandwidth or throughput performance between storage and the compute instance.
Using the above as a basis, you can build on that to leverage multiple availability zones or regions for HA, BC and DR combined with application, network load balancing and other capabilities. Note that EBS volumes are protected for durability by being spread across different servers and storage in an availability zone. Additional protection is provided by using snapshots combined with S3. Additional BC and DR or HA protection can be accomplished by replicating data across availability zones.
The above is an example of tying various components and services together. For example using different AWS availability zones, instances, EBS, S3 and other tools including those from third parties. Here is a link to a free chapter download from Cloud and Virtual Data Storage Networking (CRC Press) pertaining to data protection, BC and DR (available at Amazon here and Kindle here). In addition here is an AWS white paper on using their services for BC, HA and DR.
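To make the snapshot-based protection concrete, here is a minimal boto3 sketch that snapshots an EBS volume (snapshots are persisted durably, backed by S3) and then copies the snapshot to another region for DR. The volume ID and the specific regions are hypothetical placeholders.

```python
import boto3

# Assumptions: the volume ID is a placeholder; source and destination
# regions below are examples only.
ec2_east = boto3.client("ec2", region_name="us-east-1")

# Snapshot the volume; EBS stores the snapshot durably (backed by S3).
snap = ec2_east.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly protection copy",
)
ec2_east.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Copy the completed snapshot to another region for DR purposes.
ec2_west = boto3.client("ec2", region_name="us-west-2")
ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snap["SnapshotId"],
    Description="DR copy of nightly snapshot",
)
```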
EBS volumes are created ranging in size from 1GByte to 1TByte of space capacity, with multiple volumes able to be mapped or attached to an EC2 instance. EBS volumes appear as a virtual disk drive for block storage. From the EC2 instance and guest operating system you can mount, format and use the EBS volumes as any other block disk drive with your favorite tools and file systems. In addition to space capacity, EBS volumes are also provisioned with standard IO (e.g. disk based) performance or high performance Provisioned IOPS (e.g. SSD based) delivering thousands of IOPS per instance. AWS states that a standard EBS volume should support about 100 IOPS on average, and about 2,000 IOPS for a Provisioned IOPS volume. Need more than 2,000 IOPS? Then the AWS recommendation is to use multiple Provisioned IOPS volumes with data spread across them. Following is an example of AWS EBS volumes seen via the EC2 management interface.
AWS EC2 and EBS configuration status
Note that there is a 10 to 1 ratio of IOPS to space capacity being provisioned (up to 10 IOPS per GByte). If you try to play a game of provisioning 1,000 IOPS on a 10GByte EBS volume to keep your costs down, you are out of luck. Thus to get 1,000 IOPS you would need to allocate at least a 100GByte EBS volume, for which you will be billed on a monthly pro-rated basis. The following is an example of provisioning an AWS EBS volume using provisioned IOPS in the US East region in the 1a availability zone.
Provisioning IOPS with EBS volume
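Here is what that provisioning step might look like programmatically; a minimal boto3 sketch, assuming the io1 (provisioned IOPS) volume type and honoring the 10:1 IOPS-to-capacity ratio discussed above. The size and IOPS values are examples only.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Example values: 1,000 provisioned IOPS requires at least 100 GBytes
# under the 10 IOPS-per-GByte ratio discussed above.
size_gb = 100
iops = 1000
assert iops <= size_gb * 10, "IOPS exceeds the 10:1 IOPS-to-capacity ratio"

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # US East region, availability zone 1a
    Size=size_gb,
    VolumeType="io1",                # provisioned IOPS volume type
    Iops=iops,
)
print(volume["VolumeId"])
```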
Standard and Provisioned IOPS EBS volumes
Standard EBS volumes are good for boot images or other application usage that is not IO performance intensive. For databases or other active applications where more performance is needed, EBS Provisioned IOPS volumes are your option. Note that the provisioned IOPS rate is persistent for the specific volume during its life. Thus if you set it and forget it, including leaving it allocated but unused, you will still be billed for provisioning it.
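One simple guard against that set-and-forget billing is to periodically look for volumes that are allocated but not attached to any instance; a minimal boto3 sketch of that check follows.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find volumes that are allocated ("available") but not attached to any
# instance; these still accrue capacity and provisioned IOPS charges.
idle = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)

for vol in idle["Volumes"]:
    print(vol["VolumeId"], vol["Size"], vol["VolumeType"], vol.get("Iops"))
```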
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
The four additional EBS optimized instance types are m3.xlarge, m3.2xlarge, m2.2xlarge and c1.xlarge, providing dedicated bandwidth or throughput between the EC2 instances and EBS volumes. The performance or bandwidth ranges from 500 Mbits (500 / 8 = 62.5 MBytes) per second to 1,000 Mbits (1,000 / 8 = 125 MBytes) per second depending on the type of instance. As a refresher, EC2 instances (which by the time you read this could change) vary in size and functionality with different amounts of EC2 Compute Units (ECU), number of virtual cores, amount of storage space included, 32 or 64 bit support, storage and networking IO performance, and EBS Optimized or not. In addition to selecting an instance, different operating system images can be installed, either those licensed from AWS such as various Windows and Unix variants, or you can supply your own.
There are also different generations of instances such as M1 (first generation, where one ECU = 1.0 to 1.2 GHz of a 2007 era Opteron or Xeon processor), M3 (second generation with faster processors) along with Micro low-cost options. There are also other optimized instances for high or large amounts of memory, high CPU or compute processing, clustered compute, high memory clustered, clustered GPU (e.g. using Nvidia Tesla GPUs), high IO and high storage space capacity needs.
Here is the announcement from AWS:
Dear Amazon Web Services Customer,
We are delighted to announce the global availability of EBS-optimized support for four additional instance types: m3.xlarge, m3.2xlarge, m2.2xlarge, and c1.xlarge. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Megabits per second and 1,000 Megabits per second depending on the instance type used. The dedicated throughput minimizes contention between EBS I/O and other traffic from your Amazon EC2 instance, providing the best performance for your EBS volumes.
EBS-optimized instances are designed for use with both Standard and Provisioned IOPS EBS volumes. Standard volumes deliver 100 IOPS on average with a best effort ability to burst to hundreds of IOPS, making them well-suited for workloads with moderate and bursty I/O needs. When attached to an EBS-optimized instance, Provisioned IOPS volumes are designed to consistently deliver up to 2000 IOPS from a single volume, making them ideal for I/O intensive workloads such as databases. You can attach multiple Amazon EBS volumes to a single instance and stripe your data across them for increased I/O and throughput performance.
Amazon EBS-optimized support is now available for m3.xlarge, m3.2xlarge, m2.2xlarge, m2.4xlarge, m1.large, m1.xlarge, and c1.xlarge instance types, and is currently supported in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), Asia Pacific (Singapore), Asia Pacific (Japan), Asia Pacific (Sydney), and South America (São Paulo) Regions.
What this means is that AWS is enabling customers to size their compute instances and storage volumes with more flexibility to meet different needs. For example, EC2 instances with various compute processing capabilities, amount of memory, network and storage I/O performance to volumes. In addition, storage volumes based on different space capacity size, standard or provisioned IOP’s, bandwidth or throughput performance between the instance and volume, along with data protection such as snapshots.
This means that the cost per space capacity of an EBS volume varies based on which AWS availability zone it is in, standard (lower IOP performance) or provisioned IOP’s (faster), along with instance type. In other words, cloud storage is not just about the cost per GByte, it’s also about the cost for IOPS, bandwidth to use it, where it is located (e.g. with AWS which Availability Zone), type of service, level of availability and durability among other attributes.
Additional reading and related items:
Cloud conversations: AWS EBS, Glacier and S3 overview (Part I)
Cloud conversations: AWS EBS, Glacier and S3 overview (Part II)
Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Cloud Bulk Big Data Software Defined Object Storage Resources
Welcome to the Cloud, Big Data, Software Defined, Bulk and Object Storage Resources Center Page objectstoragecenter.com.
This object storage resources, along with software defined, cloud, bulk, and scale-out storage page is part of the server StorageIOblog microsite collection of resources. Software-defined, Bulk, Cloud and Object Storage exist to support expanding and diverse application data demands.
Bulk, Cloud, Object Storage Solutions and Services
There are various types of cloud, bulk, and object storage including public services such as Amazon Web Services (AWS) Simple Storage Service (S3), Backblaze, Google, Microsoft Azure, IBM Softlayer, Rackspace among many others. There are also solutions for hybrid and private deployment from Cisco, Cloudian, CTERA, Cray, DDN, Dell EMC, Elastifile, Fujitsu, Vantara/HDS, HPE, Hedvig, Huawei, IBM, NetApp, Noobaa, OpenIO, OpenStack, Quantum, Rackspace, Rozo, Scality, Spectra, Storpool, StorageCraft, Suse, Swift, Virtuozzo, WekaIO, WD, among many others.
Cloud products and services among others, along with associated data infrastructures including object storage, file systems, repositories and access methods are at the center of bulk, big data, big bandwidth and little data initiatives on a public, private, hybrid and community basis. After all, not everything is the same in cloud, virtual and traditional data centers or information factories from active data to in-active deep digital archiving.
Object Context Matters
Before discussing Object Storage let's take a step back and look at some context that can clarify some confusion around the term object. The word object has many different meanings and contexts, both inside of the IT world as well as outside. Context matters with the term object: as a noun, it can be a thing that can be seen or touched, or a person or thing towards which action or feeling is directed.
Besides a person, place or physical thing, an object can be a software-defined data structure that describes something. For example, a database record describing somebody’s contact or banking information, or a file descriptor with name, index ID, date and time stamps, permissions and access control lists along with other attributes or metadata. Another example is an object or blob stored in a cloud or object storage system repository, as well as an item in a hypervisor, operating system, container image or other application.
Besides being a noun, object can also be a verb, meaning to express disapproval or disagreement with something or someone. From an IT context perspective, an object can also refer to a programming methodology (e.g. object-oriented programming [OOP], or Java [among other environments] objects and classes) and systems development, in addition to describing entities with data structures.
In other words, a data structure describes an object that can be a simple variable, constant, complex descriptor of something being processed by a program, as well as a function or unit of work. There are also objects unique or with context to specific environments besides Java or databases, operating systems, hypervisors, file systems, cloud and other things.
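As a purely illustrative sketch of such a data structure, here is a small Python example describing an object as it might be represented in an object storage repository. The field names are hypothetical and not any particular product's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StoredObject:
    """Illustrative data structure describing an object in a repository.

    Field names are hypothetical, not any particular system's schema.
    """
    key: str                              # name or identifier of the object
    size_bytes: int                       # how much data it holds
    created: datetime                     # time stamp attribute
    content_type: str = "application/octet-stream"
    metadata: dict = field(default_factory=dict)   # user-defined attributes

obj = StoredObject(
    key="photos/2013/nab-show.jpg",
    size_bytes=2_457_600,
    created=datetime(2013, 4, 10, 9, 30),
    metadata={"owner": "greg", "tags": "nab, video"},
)
print(obj.key, obj.size_bytes, obj.metadata["owner"])
```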
The Need For Bulk, Cloud and Object Storage
There is no such thing as an information recession, with more data being generated, moved, processed, stored, preserved and served; granted, there are economic realities. Likewise, as a society our dependence on information being available for work or entertainment, from medical healthcare to social media and all points in between, continues to increase (check out the Human Face of Big Data).
Object and cloud storage are in your future, the questions are when, where, with what and how among others.
Watch for more content and links to be added here soon to this object storage center page including posts, presentations, podcasts, polls, perspectives along with services and product solution profiles.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
If your organization, like StorageIO, is a member of the Open Data Center Alliance (ODCA), you may be aware of the resources they make available about cloud, virtualization, security and more. Unlike so many other industry associations or trade groups dominated by vendors, the ODCA has an IT or customer focus including member-developed best practices, strategies and templates.
A good example is the recently released ODCA member BMW group private cloud strategy document.
This 24 page document covers the BMW Group's private cloud strategy, which sets the stage for a phased move to a future hybrid cloud. By being a phased approach, it seems that BMW is leveraging and transitioning to the future while maintaining support for their current environment (including Windows-based) as part of a paradigm shift. This is refreshing, and good to see how organizations are looking to use cloud as part of a paradigm or IT service delivery model and not just as a new technology or platform focus.
Topics covered include IaaS along with PaaS for DB, Web, SAP and CSaaS or Corporate Software as a Service, based on the NIST cloud model. Also included are the roles and integration of CMDB, ITSM, ITIL and orchestration in a business-driven vs. technology-driven model. Being business driven means there is a mission statement for the BMW cloud strategy, with objectives aligned to support organizational enablement rather than simply using different tools, technologies or trends, along with design criteria.
What I like about the BMW strategy is that it is aligned to support the business as opposed to finding ways to use technology to support the business, or justify why a cloud is needed. In other words, something different from those needing a technology, tool, product, standard or service to be adopted.
Thus while having been a vendor, the ODCA customer-focused angle appeals to me from when I was on the other side of the table working in IT organizations. On the other hand, for some of you, reading through the BMW document might result in déjà vu from experiences with web-based, client-server, information utility and other IT service delivery models or paradigms.
Learn more at the ODCA newsroom
If you have not done so already, check out and join the ODCA.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Fast forward to today, has anybody else noticed that there seems to be less hype and fud on Fibre Channel (FC) over Ethernet (FCoE) than a year or two or three ago?
Does this mean that FCoE as the fud or detractors were predicting is in fact stillborn with no adoption, no deployment and dead on arrival?
Does this mean that FCoE as its proponents have said is still maturing, quietly finding adoption and deployment where it fits?
Does this mean that FCoE, like its predecessors Fibre Channel and Ethernet, is still evolving, expanding from early adopter to a mature technology?
Does this mean that FCoE is simply forgotten with software defined networking (SDN) having over-shadowed it?
Does this mean that FCoE has finally lost out and that iSCSI has finally stepped up and is living up to what it was hyped to do ten years ago?
Does this mean that FC itself at either 8GFC or 16GFC is holding its own for now?
Does this mean that InfiniBand is on the rebound?
Does this mean that FCoE is simply not fun or interesting, or a shiny new technology with vendors not spending marketing money so thus people not talking, tweeting or blogging?
Does this mean that those who were either proponents pitching it or detractors despising it have found other things to talk about, from SDN to OpenFlow to IOV to Software Defined Storage (whatever, or whoever's, definition you subscribe to) to cloud, big or little data, and the list goes on?
I continue to hear of or talk with customer organizations deploying FCoE in addition to iSCSI, FC, NAS and other means of accessing storage for cloud, virtual and physical environments.
Likewise I see some vendor discussions occurring not to mention what gets picked up via google alerts.
However, in general, the rhetoric both for and against, hype and FUD, seems to have subsided, at least for now.
So what gives, what’s your take on FCoE hype and FUD?
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved