S3motion: Buckets, Containers, Objects, AWS S3, Cloud and EMCcode
It’s springtime in Kentucky, and recently I had the opportunity to have a conversation with Kendrick Coleman to talk about S3motion, buckets, containers, objects, AWS S3, cloud and object storage, node.js, EMCcode and open source among other related topics, which are available as a podcast here, or video here, and at StorageIO.tv.
In this Server StorageIO industry trends perspective podcast episode, @EMCcode (Part of EMC) developer advocate Kendrick Coleman (@KendrickColeman) joins me for a conversation. Our conversation spans spring-time in Kentucky (where Kendrick lives) which means Bourbon and horse racing as well as his blog (www.kendrickcoleman.com).
Btw, in the podcast I refer to Captain Obvious and Kendrick’s beard; for those not familiar with who or what @Captainobvious is, click here to learn more.
What about clouds, object storage, programming and other technical stuff?
Of course we also talk some tech including what is EMCcode, EMC Federation, Cloud Foundry, clouds, object storage, buckets, containers, objects, node.js, Docker, Openstack, AWS S3, micro services, and the S3motion tool that Kendrick developed.
Kendrick explains the motivation behind S3motion along with trends in and around objects (including GET, PUT vs. traditional Read, Write) as well as programming among related topic themes and how context matters.
I have used S3motion for moving buckets, containers and objects around, including between AWS S3, Google Cloud Storage (GCS) and Microsoft Azure, as well as to and from local storage. S3motion is a good tool to have in your server storage I/O toolbox for working with cloud and object storage, along with others such as Cloudberry, S3fs, Cyberduck and S3 Browser among many others.
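To give a sense of what a tool like S3motion automates, here is a minimal sketch of the underlying GET and PUT flow between two S3-compatible services, written in Python with boto3. S3motion itself is node.js based, so this is not its actual code; the bucket names, keys and credentials are placeholders, and the GCS side assumes its S3-interoperability endpoint with HMAC keys.

```python
# Sketch of the GET/PUT flow a tool like S3motion automates; not S3motion's
# actual code (which is node.js). Names and credentials are placeholders.
import boto3

src = boto3.client("s3")  # AWS S3, credentials from the environment
dst = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",  # GCS S3-interoperability endpoint
    aws_access_key_id="GCS_HMAC_KEY",
    aws_secret_access_key="GCS_HMAC_SECRET",
)

# Read (GET) the object from the source, then write (PUT) it to the
# destination; the data traverses your network in between, which is where
# the extra time, bandwidth and fees come from.
obj = src.get_object(Bucket="my-aws-bucket", Key="photo.jpg")
dst.put_object(Bucket="my-gcs-bucket", Key="photo.jpg", Body=obj["Body"].read())
```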
What this all means and wrap-up
Context matters when it comes to many things, particularly with objects, as they can mean different things. Tools such as S3motion make it easy to move your buckets or containers along with objects from one cloud storage system, solution or service to another. Also check out EMCcode to see what they are doing on different fronts, from supporting new and greenfield development with Cloud Foundry and PaaS to Openstack to bridging current environments to the next generation of platforms. Also check out Kendrick’s blog site as he has a lot of good technical content as well as some other fun stuff to learn about. I look forward to having Kendrick on as a guest again soon to continue our conversations. In the meantime, check out S3motion to see how it can fit into your server storage I/O toolbox.
Cloud Conversations: AWS S3 Cross Region Replication storage enhancements
Amazon Web Services (AWS) recently announced, among other enhancements, new Simple Storage Service (S3) cross-region replication of objects from a bucket (e.g. container) in one region to a bucket in another region. AWS also recently enhanced Elastic Block Storage (EBS), increasing the maximum performance and size of Provisioned IOPS (SSD) and General Purpose (SSD) volumes. EBS enhancements included the ability to store up to 16 TBytes of data in a single volume and do 20,000 input/output operations per second (IOPS). Read more about EBS and other recent AWS server, storage I/O and application enhancements here.
The Problem, Issue, Challenge, Opportunity and Need
The challenge is being able to move data (e.g. objects) stored in AWS buckets in one region to another in a safe, secure, timely, automated, cost-effective way.
Even though AWS has a global name-space, buckets and their objects (e.g. files, data, videos, images, bit and byte streams) are stored in a specific region designated by the customer or user (AWS S3, EBS, EC2, Glacier, Regions and Availability Zone primer can be found here).
Understanding the challenge and designing a strategy
The following diagram shows the challenge and how to copy or replicate objects in an S3 bucket in one region to a destination bucket in a different region. While objects can be copied or replicated without S3 cross-region replication, that involves essentially reading your objects, pulling that data out via the internet, and then writing it to another place. The catch is that this can add extra costs, take time, consume network bandwidth and need extra tools (Cloudberry, Cyberduck, S3fuse, S3motion, S3browser, S3 tools (not AWS) and a long list of others).
What is AWS S3 Cross-region replication
Highlights of AWS S3 Cross-region replication include:
AWS S3 cross-region replication is, as its name implies, replication of S3 objects from a bucket in one region to a destination bucket in another region.
S3 replication of new objects added to an existing or new bucket (note: only new objects get replicated)
Policy based replication tied into S3 versioning and life-cycle rules
Quick and easy to set up for use in a matter of minutes via S3 dashboard or other interfaces
Keeps region to region data replication and movement within AWS networks (potential cost advantage)
To activate, you simply enable versioning on a bucket, enable cross-region replication, indicate the source bucket (or prefix of objects in the bucket), specify the destination region and target bucket name (or create one), then create or select an IAM (Identity and Access Management) role, and objects should be replicated.
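For those who prefer an API to the dashboard, here is a minimal sketch of the same setup using the AWS SDK for Python (boto3). The bucket names and IAM role ARN are placeholders, and versioning must be enabled on both the source and destination buckets.

```python
# Minimal sketch of enabling S3 cross-region replication via boto3.
# Bucket names and the role ARN are placeholders.
import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite on both source and destination buckets.
s3.put_bucket_versioning(
    Bucket="my-source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/my-replication-role",
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",                 # empty prefix = all new objects
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
        }],
    },
)
```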
Some AWS S3 cross-region replication things to keep in mind (e.g. considerations):
As with other forms of mirroring and replication, if you add something on one side it gets replicated to the other side
As with other forms of mirroring and replication, if you delete something from one side it can be deleted from the other (be careful and do some testing)
Keep costs in perspective as you still need to pay for your S3 storage at both locations as well as applicable internal data transfer and GET fees
Click here to see current AWS S3 fees for various regions
S3 Cross-region replication and alternative approaches
There are several regions around the world, and up until today AWS customers could copy, sync or replicate S3 bucket contents between AWS regions manually (or via automation) using various tools such as Cloudberry, Cyberduck, S3browser and S3motion to name just a few, as well as via various gateways and other technologies. Some of those tools and technologies are open source or free, some are freemium and some are premium; they also vary by interface (some with GUI, others with CLI or APIs), including the ability to mount an S3 bucket as a local network drive and use tools to sync or copy.
However, a catch with the above-mentioned tools (among others) and approaches is that replicating your data (e.g. objects in a bucket) can involve other AWS S3 fees. For example, reading data (e.g. a GET, which has a fee) from one AWS region and then copying it out to the internet incurs fees. Likewise, when copying data into another AWS S3 region (e.g. a PUT, which is free) there is also the cost of storage at the destination.
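As a back-of-the-envelope illustration of the difference, the per-GB rates below are assumed placeholders rather than quotes (check current AWS pricing); the point is simply whether the data leaves AWS networks:

```python
# Illustrative replication cost comparison; rates are assumed placeholders,
# not actual AWS prices. S3 storage fees at both ends apply either way.
gb_to_copy = 500
internet_egress_per_gb = 0.09    # assumed: pull data out via the internet
inter_region_per_gb = 0.02       # assumed: AWS region-to-region transfer

diy_cost = gb_to_copy * internet_egress_per_gb   # GET + copy out, then PUT back in
crr_cost = gb_to_copy * inter_region_per_gb      # stays within AWS networks

print(f"DIY copy ~${diy_cost:.2f} vs cross-region replication ~${crr_cost:.2f}")
```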
AWS S3 cross-region hands on experience (first look)
For my first hands-on (first look) experience with AWS cross-region replication today, I enabled a bucket in the US Standard region (e.g. Northern Virginia) and created a new target destination bucket in EU (Ireland). Setup and configuration was very quick, literally just a few minutes, with most of the time spent reading the text on the new AWS S3 dashboard properties configuration displays.
I selected an existing test bucket to replicate and noticed that nothing had replicated over to the other bucket, until I realized that only new objects would be replicated. Once some new objects were added to the source bucket, within a matter of moments (e.g. a few minutes) they appeared across the pond in my EU (Ireland) bucket. When I deleted those replicated objects from my EU (Ireland) bucket and switched back to my view of the source bucket in the US, those new objects were already deleted from the source. Yes, just like regular mirroring or replication, pay attention to how you have things configured (e.g. synchronized vs. contribute vs. echo of changes etc.).
While I was not able to do a solid quantifiable performance test, simply based on some quick copies and my network speed, moving data via S3 cross-region replication was faster than using something like S3motion with my server in the middle.
It also appears from some initial testing today that a benefit of AWS S3 cross-region replication (besides being bundled and part of AWS) is that some fees to pull data out of AWS and transfer out via the internet can be avoided.
Where to learn more
Here are some links to learn more about AWS S3 and related topics
How do primary storage clouds and cloud for backup differ?
What’s most important to know about my cloud privacy policy?
What this all means and wrap-up
For those who are looking for a way to streamline replicating data (e.g. objects) from an AWS bucket in one region to a bucket in a different region, you now have a new option. There are potential cost savings if that is your goal, along with performance benefits, in addition to using whatever might already be working in your environment. Replicating objects provides a way of expanding your business continuance (BC), business resiliency (BR) and disaster recovery (DR) involving S3 across regions, as well as a means for content caching or distribution among other possible uses.
Overall, I like this ability for moving S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3fs for moving data in and out of AWS as well as among other public cloud services and local resources.
How to test your HDD SSD AFA Hybrid or cloud storage
Updated 2/14/2018
Over at BizTech Magazine I have a new article, 4 Ways to Performance Test Your New HDD or SSD, that provides a quick guide to verifying or learning what speed characteristics your new storage device is capable of.
To some, the above (read the full article here) may seem like common-sense tips and things everybody should know; otoh, there are many people who are new to server, storage, I/O and networking hardware and software, cloud and virtual environments, along with various applications, not to mention different tools.
Thus the above is a refresher for some (e.g. déjà vu) while for others it might be new and revolutionary or simply helpful. If you are interested in HDDs and SSDs as well as other server storage I/O performance topics, along with benchmarking tools, techniques and trends, check out the collection of links here (Server and Storage I/O Benchmarking and Performance Resources).
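As a taste of what such a test involves, here is a minimal sketch of a sequential write speed check; this is an assumed, illustrative approach, and purpose-built tools such as those covered in the benchmarking resources above give far more control over IO size, queue depth and read/write mix.

```python
# Minimal sequential write throughput check (illustrative sketch only).
import os
import time

path = "testfile.bin"            # place this on the device under test
block = b"\0" * (1024 * 1024)    # 1 MByte per write
count = 256                      # 256 MBytes total

start = time.perf_counter()
with open(path, "wb") as f:
    for _ in range(count):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())         # commit to the device, not just file system cache
elapsed = time.perf_counter() - start

print(f"~{count / elapsed:.1f} MBytes/sec sequential write")
os.remove(path)
```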
Server Storage I/O Benchmarking Performance Resource Tools
Updated 1/23/2018
Server storage I/O benchmark performance resource tools, various articles and tips. These include tools for legacy, virtual, cloud and software defined environments.
The best server and storage I/O (input/output operation) is the one that you do not have to do, the second best is the one with the least impact.
This is where the idea of locality of reference (e.g. how close is the data to where your application is running) comes into play which is implemented via tiered memory, storage and caching shown in the figure above.
Server storage I/O performance applies to cloud, virtual, software defined and legacy environments
What this has to do with server storage I/O (and networking) performance benchmarking is keeping the idea of locality of reference, context and the application workload in perspective, regardless of whether it is a cloud, virtual, software-defined or legacy physical environment.
Various Server Storage I/O tools in a hadoop environment
Michael-noll.com: Benchmarking and Stress Testing an Hadoop Cluster With TeraSort, TestDFSIO
Virtualization Practice: SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
StorageIOblog: Storage and IO metrics that matter
InfoStor: Storage Metrics and Measurements That Matter: Getting Started
SilvertonConsulting: Storage throughput vs. IO response time and why it matters
Splunk: The percentage of Read / Write utilization to get to 800 IOPS?
There are many different types of server storage I/O IOPS associated with various environments, applications and workloads. Some I/O activity is measured in IOPS, other activity in transactions per second (TPS), files or messages per unit of time (hour, minute, second), gets, puts or other operations. The best IO is the one you do not have to do.
What about all the cloud, virtual, software-defined and legacy-based applications that still need to do I/O?
If no IO operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible with the best locality of reference.
Also keep in mind that aggregation (e.g. consolidation) can cause aggravation (server storage I/O performance bottlenecks).
Example of aggregation (consolidation) causing aggravation (server storage i/o blender bottlenecks)
And the third best?
It’s the one that can be done in less time or at the least cost or effect to the requesting application, which means moving further down the memory and storage stack.
Leveraging flash SSD and cache technologies to find and fix server storage I/O bottlenecks
On the other hand, any IOP metric, regardless of whether for block, file or object storage, that includes some context is better than one without, particularly when involving metrics that matter (here, here and here [webinar]).
Server Storage I/O optimization and effectiveness
The problem with IO’s is that they are basic operations to get data into and out of a computer or processor, so there is no way to avoid all of them, unless you have a very large budget. Even if you have a large budget that can afford an all-flash SSD solution, you may still meet bottlenecks or other barriers.
IO’s require CPU or processor time and memory to set up and then process the results, as well as IO and networking resources to move data to its destination or retrieve it from where it is stored. While IO’s cannot be eliminated, their impact can be greatly improved or optimized by, among other techniques, doing fewer of them via caching and by grouping reads or writes (pre-fetch, write-behind).
Think of it this way: Instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip. However, that optimization may also mean your drive will take longer. So, sometimes it makes sense to go on a couple of quick, short, low-latency trips instead of one larger one that takes half a day even as it accomplishes many tasks. Of course, how far you have to go on those trips (i.e., their locality) makes a difference about how many you can do in a given amount of time.
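As a simple sketch of the caching idea (a hypothetical example, not tied to any particular product), keeping recently read blocks in memory turns repeat IO’s into the best kind of IO, the one you no longer have to do:

```python
# Sketch: cutting repeat IOs with a small DRAM read cache (hypothetical example).
from functools import lru_cache

@lru_cache(maxsize=1024)   # keep up to 1024 recently read blocks in memory
def read_block(path: str, block_no: int, block_size: int = 4096) -> bytes:
    with open(path, "rb") as f:
        f.seek(block_no * block_size)
        return f.read(block_size)

# The first call for a given block does a real disk IO; repeat reads of a
# hot block are cache hits resolved from memory with no disk IO at all.
```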
Locality of reference (or proximity)
What is locality of reference?
This refers to how close (i.e., its place) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, ready to be acted on immediately. This would be followed by levels 1, 2, and 3 (L1, L2, and L3) onboard caches, followed by main memory, or DRAM. After that comes solid-state memory typically NAND flash either on PCIe cards or accessible on a direct attached storage (DAS), SAN, or NAS device.
Even though a PCIe NAND flash card is close to the processor, there still remains the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, this information is used to help with cache hits, cache use, and cache effectiveness vs. simply boosting cache use.
SSD to the rescue?
What can you do to cut the impact of IO’s?
There are many steps one can take, starting with establishing baseline performance and availability metrics.
The metrics that matter include IOP’s, latency, bandwidth, and availability. Then, leverage metrics to gain insight into your application’s performance.
Understand that IO’s are a fact of applications doing work (storing, retrieving, managing data) no matter whether systems are virtual, physical, or running in the cloud. But it’s important to understand just what a bad IO is, along with its impact on performance. Try to identify those that are bad, and then find and fix the problem, either with software, application, or database changes. Perhaps you need to throw more software caching tools, hypervisors, or hardware at the problem. Hardware may include faster processors with more DRAM and faster internal busses.
Leveraging local PCIe flash SSD cards for caching or as targets is another option.
You may want to use storage systems or appliances that rely on intelligent caching and storage optimization capabilities to help with performance, availability, and capacity.
Where to gain insight into your server storage I/O environment
There are many tools that can be used to gain insight into your server storage I/O environment across cloud, virtual, software-defined and legacy environments, as well as from different layers (e.g. applications, database, file systems, operating systems, hypervisors, server, storage, I/O networking). Many applications along with databases have either built-in or optional tools from their provider, third parties, or other sources that can give information about the work activity being done. Likewise there are tools to dig down deeper into the various layers of the data information infrastructure to see what is happening, as shown in the following figures.
Gaining application and operating system level performance insight via different tools
Insight and awareness via operating system tools on Windows and Linux
In the above example, Spotlight on Windows (SoW), which you can download for free from Dell here, along with Ubuntu utilities are shown. You could also use other tools to look at server storage I/O performance, including Windows Perfmon among others.
Using Visual ESXtop to dig deeper into virtual server storage I/O performance
Gaining insight into virtual server storage I/O cache performance
Wrap up and summary
There are many approaches to address (e.g. find and fix) vs. simply move or mask data center and server storage I/O bottlenecks. Having insight and awareness into how your environment along with its applications is behaving is important for knowing where to focus resources. Also keep in mind that a bit of flash SSD or DRAM cache in the applicable place can go a long way, while a lot of cache will also cost you cash. Even if you can’t eliminate I/Os, look for ways to decrease their impact on your applications and systems.
Revisiting RAID data protection remains relevant and resources
Updated 2/10/2018
RAID data protection remains relevant, including erasure codes (EC) and local reconstruction codes (LRC) among other technologies. If RAID were really not relevant anymore (e.g. actually dead), why do some people spend so much time trying to convince others that it is dead, or to use a different RAID level, enhanced RAID, or related advanced approaches that go beyond RAID?
When you hear RAID, what comes to mind?
A legacy monolithic storage system that supports narrow 4, 5 or 6 drive-wide stripe sets, or a modern system supporting dozens of drives in a RAID group with different options?
RAID means many things, likewise there are different implementations (hardware, software, systems, adapters, operating systems) with various functionality, some better than others.
For example, which of the items in the following figure come to mind, or perhaps are new to your RAID vocabulary?
There are many variations of RAID storage: some for the enterprise, some for SMB, SOHO or consumer. Some have better performance than others; some have poor performance, for example causing extra writes that lead to the perception that all parity-based RAID does extra writes (some implementations actually do write gathering and optimization).
Some hardware and software implementations use a write-back cache (WBC), mirrored or battery-backed (BBU), along with the ability to group writes together in memory (cache) to do full-stripe writes. The result can be fewer back-end writes compared to other systems. Hence, not all RAID implementations in either hardware or software are the same. Likewise, just because a RAID definition shows a particular theoretical implementation approach does not mean all vendors have implemented it in that way.
RAID is not a replacement for backup; rather it is part of an overall approach to providing data availability and accessibility.
What’s the best RAID level? The one that meets YOUR needs
There are different RAID levels and implementations (hardware, software, controller, storage system, operating system, adapter among others) for various environments (enterprise, SME, SMB, SOHO, consumer) supporting primary, secondary, tertiary (backup/data protection, archiving).
General RAID comparisons
Thus one size or approach does not fit all solutions; likewise RAID rules of thumb or guides need context. Context means that a RAID rule or guide for consumer or SOHO or SMB might be different for the enterprise and vice versa, not to mention depending on the type of storage system, number of drives, drive type and capacity among other factors.
General basic RAID comparisons
Thus the best RAID level is the one that meets your specific needs in your environment. What is best for one environment and application may be different from what is applicable to your needs.
Key points and RAID considerations include:
· Not all RAID implementations are the same, some are very much alive and evolving while others are in need of a rest or rewrite. So it is not the technology or techniques that are often the problem, rather how it is implemented and then deployed.
· It may not be RAID that is dead, rather the solution that uses it; hence if you think a particular storage system, appliance, product or software is old and dead along with its RAID implementation, then just say that product or vendor’s solution is dead.
· RAID can be implemented in hardware controllers, adapters or storage systems and appliances as well as via software and those have different features, capabilities or constraints.
· Long or slow drive rebuilds are a reality with larger disk drives and parity-based approaches; however, you have options on how to balance performance, availability, capacity, and economics.
· RAID can be single, dual or multiple parity or mirroring-based.
· Erasure and other coding schemes leverage parity techniques, and guess what umbrella those parity schemes fall under.
· RAID may not be cool, sexy or a fun topic and technology to talk about, however many trendy tools, solutions and services actually use some form or variation of RAID as part of their basic building blocks. This is an example of using new and old things in new ways to help each other do more without increasing complexity.
· Even if you are not a fan of RAID and think it is old and dead, at least take a few minutes to learn more about what it is that you do not like to update your dead FUD.
Wait, Isn’t RAID dead?
There is some dead marketing that paints a broad picture that RAID is dead to prop up something new, which in some cases may be a derivative variation of parity RAID.
Data dispersal and durability
RAID continues to evolve with rapid rebuilds for some systems
Otoh, there are some specific products, technologies and implementations that may be end of life or actually dead. Likewise what might be dead, dying or simply not in vogue are specific RAID implementations or packaging. Certainly there is a lot of buzz around object storage, cloud storage, forward error correction (FEC) and erasure coding, including messages of how they displace RAID. The catch is that some object storage solutions are overlaid on top of lower-level file systems that do things such as RAID 6; granted, they are out of sight, out of mind.
General RAID parity and erasure code/FEC comparisons
Then there are advanced parity protection schemes, including FEC and erasure codes, which while not your traditional RAID levels have characteristics including chunking or sharding data and spreading it out over multiple devices with multiple parity (or derivatives of parity) protection.
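To make the parity connection concrete, here is a minimal sketch of single-parity (RAID 4/5 style) protection using XOR; this is illustrative only, as real implementations add striping, parity rotation, write caching and much more:

```python
# Minimal sketch of single-parity protection via XOR (RAID 4/5 style).
def xor_parity(chunks):
    """XOR equal-sized chunks together; the result is the parity chunk."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]           # data chunks on three drives
p = xor_parity(data)                          # parity chunk on a fourth drive

# If the drive holding data[1] fails, rebuild it from the survivors + parity:
rebuilt = xor_parity([data[0], data[2], p])
assert rebuilt == data[1]
```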
Bottom line is that for some environments, different RAID levels may be more applicable and alive than for others.
Make Use of Applicable Technologies and Techniques
If RAID is alive, what to do with it?
If you are new to RAID, learn more about the past, present and future, keeping context in mind. Keeping context in mind means that there are different RAID levels and implementations for various environments. Not all RAID 0, 1, 1/0, 10, 2, 3, 4, 5, 6 or other variations (past, present and emerging) are the same for consumer vs. SOHO vs. SMB vs. SME vs. enterprise, nor are the usage cases. Some need performance for reads, others for writes, some for high capacity with low performance, using hardware or software. RAID rules of thumb are ok and useful, however keep them in context to what you are doing as well as using.
What to do next?
Take some time to learn, ask questions including what to use when, where, why and how, as well as whether an approach or recommendation is applicable to your needs. Check out the following links to read some extra perspectives about RAID, and keep in mind that what might apply to the enterprise may not be relevant for consumer or SMB and vice versa.
What’s my preferred RAID level? That depends: for some things it’s RAID 1, for others RAID 10, yet for others RAID 4, 5, 6 or DP, and yet other situations could be a fit for RAID 0 or erasure codes and FEC. Instead of being focused on just one or two RAID levels as the solution for different problems, I prefer to look at the environment (consumer, SOHO, small or large SMB, SME, enterprise), type of usage (primary or secondary or data protection), performance characteristics, reads, writes, and type and number of drives among other factors. What might be a fit for one environment may not be a fit for others; thus my preferred RAID level, along with where it is implemented, is the one that meets the given situation. However, also keep in mind tying RAID into part of an overall data protection strategy; remember, RAID is not a replacement for backup.
What this all means
Like other technologies that have been declared dead for years or decades, aka the zombie technologies (e.g. dead yet still alive), RAID continues to be used while the technology evolves. There are specific products, implementations or even RAID levels that have faded away or are declining in some environments, yet are alive in others. RAID and its variations are still alive; however, how it is used or deployed in conjunction with other technologies is also evolving.
Can we get a side of context with them server storage metrics?
What’s the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.
In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.
Expanding the conversation, the need for more context
The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for DRAM and nand flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.
This is good because it expands the conversation beyond just cost for space capacity into other aspects including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.
Adding a side of context
The catch is that IOPS, while part of the equation, are just one aspect of performance; by themselves, without context, they may have little meaning if not be misleading in some situations.
Granted it can be entertaining, fun to talk about or simply make good press copy for a million IOPS. IOPS vary in size depending on the type of work being done, not to mention reads or writes, random and sequential, which also have a bearing on data throughput or bandwidth (MBytes per second) along with response time. Not to mention block, file, object or blob as well as table.
However, are those million IOP’s applicable to your environment or needs?
Likewise, what do those million or more IOPS represent about type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered) on a SSD or HDD?
How about the response time or latency for achieving them IOPS?
In other words, what is the context of those metrics and why do they matter?
Metrics that matter give context, for example IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, real-world reflective.
As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator and then align to your needs. The tool or technology should work for you, not the other way around.
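A bit of simple arithmetic shows why context such as IO size and latency matters; the numbers below are illustrative, loosely modeled on a single-threaded HDD test:

```python
# Rough relationships among the metrics that matter (illustrative numbers).
io_size_kb = 4                        # size of each IO
iops = 29_736                         # e.g. cached sequential 4K reads
bandwidth_kb_s = iops * io_size_kb    # ~118,944 KBytes/sec

# Latency bounds IOPS for a given number of outstanding IOs:
queue_depth = 1                       # one worker, one IO in flight
latency_s = 0.010                     # 10 ms average response time
max_iops = queue_depth / latency_s    # ~100 IOPS

print(bandwidth_kb_s, max_iops)
```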
Here are some examples of context that can be added to help make IOP’s and other metrics matter:
What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
Are they reads, writes, random, sequential or mixed and what percentage?
How was the storage configured including RAID, replication, erasure or dispersal codes?
Then there is the latency or response time and IO queue depths for the given number of IOPS.
Let us not forget if the storage systems (and servers) were busy with other work or not.
If there is a cost per IOP, is that list price or a discounted price (hint: if discounted, start negotiations from there)?
What was the number of threads or workers, along with how many servers?
What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
Was the IOP’s number with one worker or multiple workers on a single or multiple servers?
Did the IOP’s number come from a single storage system or total of multiple systems?
Fast storage needs fast servers and networks; what was their configuration?
Was the performance a short burst, or long sustained period?
What was the size of the test data used; did it all fit into cache?
Were write data committed synchronously to storage, or deferred (aka lazy writes used)?
The above are just a sampling and not all may be relevant to your particular needs; however they help to put IOPS into more context. Another consideration around IOPS is the configuration of the environment: were they measured from an actual running application using some measurement tool, or generated from a workload tool such as IOmeter, IOrate or Vdbench among others?
Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.
Does size or age of vendors make a difference when it comes to context?
Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.
Meanwhile other vendors are doing a good job of adding context to their IOP, response time or bandwidth among other metrics that matter. There is a mix of startup and established vendors that give context with their IOPS; likewise size or age does not seem to matter for those who lack context.
Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.
Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.
What this means is let us start providing, and asking for, metrics that matter such as IOPS with context.
If you have a great IOP metric and you want it to matter, then include some context such as what size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, random or sequential.
IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.
Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.
So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?
This is the second post of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. medium) characteristics, following up on How many IOPS can a HDD HHDD SSD do with VMware. In the first post the focus was around putting some context around drive or device performance, with this second part looking at some workload characteristics (e.g. benchmarks).
Here are some examples to give you some more insight.
For example, the following shows how IOPS vary by changing the percent of reads, writes, random and sequential for a 4K (4,096 bytes or 4 KBytes) IO size with each test step (4 minutes each).
| IO Size for test | Workload Pattern of test | Avg. Resp (R+W) ms | Avg. IOPS (R+W) | Bandwidth KB/sec (R+W) |
|------------------|--------------------------|--------------------|-----------------|------------------------|
| 4KB | 100% Seq 100% Read | 0.0 | 29,736 | 118,944 |
| 4KB | 60% Seq 100% Read | 4.2 | 236 | 947 |
| 4KB | 30% Seq 100% Read | 7.1 | 140 | 563 |
| 4KB | 0% Seq 100% Read | 10.0 | 100 | 400 |
| 4KB | 100% Seq 60% Read | 3.4 | 293 | 1,174 |
| 4KB | 60% Seq 60% Read | 7.2 | 138 | 554 |
| 4KB | 30% Seq 60% Read | 9.1 | 109 | 439 |
| 4KB | 0% Seq 60% Read | 10.9 | 91 | 366 |
| 4KB | 100% Seq 30% Read | 5.9 | 168 | 675 |
| 4KB | 60% Seq 30% Read | 9.1 | 109 | 439 |
| 4KB | 30% Seq 30% Read | 10.7 | 93 | 373 |
| 4KB | 0% Seq 30% Read | 11.5 | 86 | 346 |
| 4KB | 100% Seq 0% Read | 8.4 | 118 | 474 |
| 4KB | 60% Seq 0% Read | 13.0 | 76 | 307 |
| 4KB | 30% Seq 0% Read | 11.6 | 86 | 344 |
| 4KB | 0% Seq 0% Read | 12.1 | 82 | 330 |

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO), thread count 1, 4K IO size
In the above example the drive is a 1TB 7200 RPM 3.5 inch Dell (Western Digital) 3Gb SATA device doing raw (non file system) IO. Note the high IOP rate with 100 percent sequential reads and a small IO size which might be a result of locality of reference due to drive level cache or buffering.
Some drives have larger buffers than others, from a couple of MBytes up to 16MB (or more) of DRAM that can be used for read-ahead caching. Note that this level of cache is independent of a storage system, RAID adapter or controller or other forms and levels of buffering.
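A rough service-time model shows why the random numbers land around 100 IOPS while the sequential read number can only be cache; the figures below are assumed typical values for a desktop-class 7200 RPM drive:

```python
# Rough mechanical service-time model for a 7200 RPM HDD (assumed figures).
avg_seek_ms = 8.5                              # typical desktop average seek
rotational_latency_ms = 0.5 * 60_000 / 7200    # half a revolution ~= 4.17 ms
service_time_ms = avg_seek_ms + rotational_latency_ms

print(f"~{1000 / service_time_ms:.0f} random IOPS")   # ~79 IOPS

# The measured ~100 random read IOPS at ~10 ms is in that mechanical
# ballpark, while 29,736 reads/sec at ~0 ms response time can only be
# coming from the drive's DRAM read-ahead buffer, not the spinning media.
```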
Does this mean you can expect or plan on getting those levels of performance?
I would not make that assumption, and thus this serves as an example of using metrics like these in the proper context.
Building off of the previous example, the following is using the same drive however with a 16K IO size.
| IO Size for test | Workload Pattern of test | Avg. Resp (R+W) ms | Avg. IOPS (R+W) | Bandwidth KB/sec (R+W) |
|------------------|--------------------------|--------------------|-----------------|------------------------|
| 16KB | 100% Seq 100% Read | 0.1 | 7,658 | 122,537 |
| 16KB | 60% Seq 100% Read | 4.7 | 210 | 3,370 |
| 16KB | 30% Seq 100% Read | 7.7 | 130 | 2,080 |
| 16KB | 0% Seq 100% Read | 10.1 | 98 | 1,580 |
| 16KB | 100% Seq 60% Read | 3.5 | 282 | 4,522 |
| 16KB | 60% Seq 60% Read | 7.7 | 130 | 2,090 |
| 16KB | 30% Seq 60% Read | 9.3 | 107 | 1,715 |
| 16KB | 0% Seq 60% Read | 11.1 | 90 | 1,443 |
| 16KB | 100% Seq 30% Read | 6.0 | 165 | 2,644 |
| 16KB | 60% Seq 30% Read | 9.2 | 109 | 1,745 |
| 16KB | 30% Seq 30% Read | 11.0 | 90 | 1,450 |
| 16KB | 0% Seq 30% Read | 11.7 | 85 | 1,364 |
| 16KB | 100% Seq 0% Read | 8.5 | 117 | 1,874 |
| 16KB | 60% Seq 0% Read | 10.9 | 92 | 1,472 |
| 16KB | 30% Seq 0% Read | 11.8 | 84 | 1,353 |
| 16KB | 0% Seq 0% Read | 12.2 | 81 | 1,310 |

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO), thread count 1, 16K IO size
The previous two examples are excerpts of a series of workload simulation tests (ok, you can call them benchmarks) that I have done to collect information, as well as try some different things out.
The following is an example of the summary for each test output that includes the IO size, workload pattern (reads, writes, random, sequential), duration for each workload step, totals for reads and writes, along with averages including IOP’s, bandwidth and latency or response time.
Want to see more numbers, speeds and feeds? Check out the following table, which will be updated with extra results as they become available.
Performance characteristics 1 worker (thread count) for RAW IO (non-file system)
Note: (1) the Seagate Momentus XT is a Hybrid Hard Disk Drive (HHDD) based on a 7.2K 2.5 inch HDD with SLC nand flash integrated as a read buffer in addition to the normal DRAM buffer. This model is an XT I (4GB SLC nand flash); an XT II (8GB SLC nand flash) may be added at some future time.
As a starting point, these results are raw IO, with file system based information to be added soon along with more devices. These results are for tests with one worker or thread count; other results, such as with 16 workers or thread counts, will be added to show how those differ.
The above results include all reads, all writes, a mix of reads and writes, along with all random, sequential and mixed for each IO size. IO sizes include 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1024K and 2048K. As with any workload simulation, benchmark or comparison test, take these results with a grain of salt as your mileage can and will vary. For example, you will see what I consider some very high IO rates with sequential reads even without file system buffering. These results might be due to locality of reference of IOs being resolved out of the drive’s DRAM cache (read ahead), which varies in size for different devices. Use the vendor model numbers in the table above to check the manufacturer’s specs on drive DRAM and other attributes.
If you are used to seeing 4K or 8K and wonder why anybody would be interested in some of the larger sizes, take a look at big fast data or cloud and object storage. For some of those applications 2048K may not seem all that big. Likewise if you are used to the larger sizes, there are still applications doing smaller sizes. Sorry for those who like 512 byte or smaller IOs, as they are not included. Note that for all of these, unless indicated, a 512 byte standard sector or drive format is used as opposed to the emerging Advanced Format (AF) 4KB sector or block size. Watch for some more drive and device types to be added to the above, along with results for more workers or thread counts, as well as file system and other scenarios.
Using VMware as part of a Server, Storage and IO (aka StorageIO) test platform
The above performance results were generated on Ubuntu 12.04 (since upgraded to 14.04) hosted on a VMware vSphere 5.1 (upgraded to 5.5U2) purchased version (you can get the ESXi free version here) with vCenter enabled. I also have VMware Workstation installed on some of my Windows-based laptops for doing preliminary testing of scripts and other activity prior to running them on the larger server-based VMware environment. Other VMware tools include vCenter Converter, vSphere Client and CLI. Note that other guest virtual machines (VMs) were idle during the tests (e.g. other guest VMs were quiet). You may experience different results if you run Ubuntu natively on a physical machine or with different adapters, processors and device configurations among many other variables (that was a disclaimer btw ;) ).
All of the devices (HDD, HHDD and SSDs, including those not shown or published yet) were Raw Device Mapped (RDM) to the Ubuntu VM, bypassing the VMware file system.
Example of creating an RDM for local SAS or SATA direct attached device.
The above uses the drive’s address (found by doing a ls -l /dev/disks via the VMware shell command line) to then create a vmdk container stored in a datastore. Note that the RDM being created does not actually store data in the .vmdk; it’s there for VMware management operations.
If you are not familiar with how to create a RDM of a local SAS or SATA device, check out this post to learn how. This is important to note in that while VMware was used as a platform to support the guest operating systems (e.g. Ubuntu or Windows), the real devices are not being mapped through or via VMware virtual drives.
The above shows examples of RDM SAS and SATA devices along with other VMware devices and datastores. In the next figure is an example of a workload being run in the test environment.
One of the advantages of using VMware (or another hypervisor) with RDMs is that I can quickly define via software commands where a device gets attached to different operating systems (e.g. the other aspect of software-defined storage). This means that after a test run, I can simply shut down Ubuntu, remove the RDM device from that guest’s settings, move the device just tested to a Windows guest if needed, and restart those VMs. All of that from wherever I happen to be working, without physically changing things or dealing with multi-boot or cabling issues.
How many I/O IOPS can flash SSD or HDD do with VMware?
Updated 2/10/2018
A common question I run across is how many I/O operations per second (IOPS) a flash SSD or HDD storage device or system can do or give.
The answer is or should be it depends.
This is the first of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. medium) characteristics across HDD, HHDD and SSD that can be found in cloud, virtual and legacy environments. In this first part the focus is around putting some context around drive or device performance, with the second part looking at some workload characteristics (e.g. benchmarks).
Let’s leave those for a different discussion at another time.
Getting started
Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and IO performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configuration, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).
Btw, if you happen to be interested in learning more about server, storage and IO performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier), which has a bit of information on it. There is also coverage of metrics and planning in my two other books, The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested, drop me a note and I can fill you in on the details.
There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS, some that are older, while others may be guesses or measured in different ways. However the answer is that it depends on many things, ranging from whether it is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), or whether it is attached to a storage system, appliance, or RAID adapter card among others.
Taking a step back, the big picture
Various HDD, HHDD and SSD’s
Server, storage and I/O performance and benchmark fundamentals
Even if just looking at a HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM) to the interface, including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 vs. 3.5 inch or enterprise vs. desktop drives. For example, some current generation 10K 2.5 inch HDDs can deliver the same or better performance than an older generation 3.5 inch 15K. Other drive factors (see this link for HDD fundamentals) include physical size, such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise or desktop or consumer, and the amount of drive-level cache (DRAM). Space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, a USB port, a HBA or RAID adapter card, or a storage system.
HDD fundamentals
How about benchmark and performance marketing or comparison tricks, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let’s not forget about short stroking (only using a portion of a drive for better IOPS) or even long stroking (to get better bandwidth leveraging spiral transfers) among others.
Almost forgot: there are also thick, standard, thin and ultra-thin drives in 2.5 and 3.5 inch form factors. What’s the difference? The number of platters and read/write heads. Look at the following image showing various thickness 2.5 inch drives that have various numbers of platters to increase space capacity in a given density. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk-based archives along with for onsite disk-based backup targets (compliments my offsite cloud backups)?
Thick, thin and ultra thin devices
Beyond physical and configuration items, there are logical configuration items, including the type of workload, large or small IOPS, random, sequential, reads, writes or mixed (various random, sequential, read, write, large and small IO). Other considerations include file system or raw device, number of workers or concurrent IO threads, and the size of the target storage space area, to determine the impact of any locality of reference or buffering. Some other items include how long the test or workload simulation ran for, and whether the device was new or worn in before use, among other items.
Tools and the performance toolbox
Then there are the various tools for generating IOs or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (a mix of free and for-fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, Vmark, DVDstore and PCmark 7 among many others. Some are focused just on the storage system and IO path, while others are application-specific, thus exercising servers, storage and IO paths.
Server, storage and IO performance toolbox
Having used Iometer since the late 90s, it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCmark 7 (full version), which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example, PCmark can use a secondary drive for doing IO.
PCmark can be handy when using VMware (or other tools) to spin up lots of virtual Windows systems pointing to a NAS or other shared storage device, doing real-world type activity. Something that could be handy for testing or stressing virtual desktop infrastructure (VDI) along with other storage systems, servers and solutions. I also have Vdbench among other tools in the toolbox, including Iorate, which was used to drive the workloads shown below.
What I look for in a tool is how extensible the scripting capabilities are to define various workloads, along with the capabilities of the test engine. A nice GUI is handy, which makes Iometer popular, and yes there are script capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that have more emphasis on extensibility vs. ease-of-use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPS using default settings to see what happens.
Another handy type of tool is one for recording what’s going on with a running system, including IOs, reads, writes, bandwidth or transfers, random and sequential among other things. This is where, when needed, I turn to something like HiMon from HyperIO; if you have not tried it, get in touch with Tom West over at HyperIO and tell him StorageIO sent you to get a demo or trial. HiMon is what I used for doing start, stop and boot among other testing, being able to see IOs at the Windows file system level (or below), including very early in the boot or shutdown phase.
Cloud Bulk Big Data Software Defined Object Storage Resources
Welcome to the Cloud, Big Data, Software Defined, Bulk and Object Storage Resources Center Page objectstoragecenter.com.
This object storage resources, along with software defined, cloud, bulk, and scale-out storage page is part of the server StorageIOblog microsite collection of resources. Software-defined, Bulk, Cloud and Object Storage exist to support expanding and diverse application data demands.
Bulk, Cloud, Object Storage Solutions and Services
There are various types of cloud, bulk, and object storage including public services such as Amazon Web Services (AWS) Simple Storage Service (S3), Backblaze, Google, Microsoft Azure, IBM Softlayer, Rackspace among many others. There are also solutions for hybrid and private deployment from Cisco, Cloudian, CTERA, Cray, DDN, Dell EMC, Elastifile, Fujitsu, Vantara/HDS, HPE, Hedvig, Huawei, IBM, NetApp, Noobaa, OpenIO, OpenStack, Quantum, Rackspace, Rozo, Scality, Spectra, Storpool, StorageCraft, Suse, Swift, Virtuozzo, WekaIO, WD, among many others.
Cloud products and services among others, along with associated data infrastructures including object storage, file systems, repositories and access methods are at the center of bulk, big data, big bandwidth and little data initiatives on a public, private, hybrid and community basis. After all, not everything is the same in cloud, virtual and traditional data centers or information factories from active data to in-active deep digital archiving.
Object Context Matters
Before discussing object storage, let’s take a step back and look at some context that can clarify some confusion around the term object. The word object has many different meanings and contexts, both inside of the IT world as well as outside. Context matters with the term object: as a noun, an object can be a thing that can be seen or touched, as well as a person or thing toward which action or feeling is directed.
Besides a person, place or physical thing, an object can be a software-defined data structure that describes something. For example, a database record describing somebody’s contact or banking information, or a file descriptor with name, index ID, date and time stamps, permissions and access control lists along with other attributes or metadata. Another example is an object or blob stored in a cloud or object storage system repository, as well as an item in a hypervisor, operating system, container image or other application.
Besides being a noun, object can also be a verb, such as expressing disapproval or disagreement with something or someone. From an IT context perspective, an object can also refer to a programming method (e.g. object-oriented programming [OOP], or Java [among other environments] objects and classes) and systems development, in addition to describing entities with data structures.
In other words, a data structure describes an object that can be a simple variable, a constant, or a complex descriptor of something being processed by a program, as well as a function or unit of work. There are also objects unique to, or with context in, specific environments besides Java or databases, operating systems, hypervisors, file systems, cloud and other things.
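As a small illustration of an object being a described data structure, here is a sketch in Python; the field names are hypothetical, as real object stores each define their own metadata schema:

```python
# Hypothetical sketch of an object as a described data structure.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StoredObject:
    key: str                   # name within its bucket or container
    size_bytes: int
    etag: str                  # content hash / version identifier
    modified: datetime
    metadata: dict = field(default_factory=dict)  # user-defined attributes

obj = StoredObject(
    key="photos/cat.jpg",
    size_bytes=1_048_576,
    etag="9b2cf535f27731c974343645a3985328",
    modified=datetime.now(),
    metadata={"owner": "kendrick", "content-type": "image/jpeg"},
)
```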
The Need For Bulk, Cloud and Object Storage
There is no such thing as an information recession with more data being generated, moved, processed, stored, preserved and served, granted there are economic realities. Likewise as a society our dependence on information being available for work or entertainment, from medical healthcare to social media and all points in between continues to increase (check out the Human Face of Big Data).
Object and cloud storage are in your future, the questions are when, where, with what and how among others.
Watch for more content and links to be added to this object storage center page soon, including posts, presentations, podcasts, polls and perspectives, along with services and product solution profiles.
Storage virtualization, along with virtual storage and storage hypervisors, has a theme of abstracting underlying physical hardware resources, as server virtualization does. The abstraction can be for consolidation and aggregation, or for enabling agility, flexibility, emulation and other functionality.
Storage virtualization can be implemented in different locations, in many ways, with various functionality and focus. For example, the abstraction can occur on a server, in a virtual or physical appliance (e.g. tin-wrapped software), in a network switch or router, as well as in a storage system. The focus can be aggregation, or data protection (HA, BC, DR, backup, replication, snapshot) on a homogeneous (all one vendor) or mixed-vendor (heterogeneous) basis.
Here is a link to a guest post that I recently did over at The Virtualization Practice looking at storage hypervisors, virtual storage and storage virtualization. As is the case with virtual storage, storage virtualization and storage for virtual environments, what you call a storage hypervisor will probably vary depending on your views, spheres of influence and preferences among other factors.
Additional related material:
Are you using or considering implementation of a storage hypervisor?
In the spirit of Halloween and zombie season, a couple of thoughts come to mind about vendor tricks and treats. This is an industry trends and perspectives post, part of an ongoing series looking at various technology and fun topics.
The first trick or treat game pertains to the blame game; you know either when something breaks, or at the other extreme, before you have even made a decision to buy something. The trick or treat game for decision-making goes something like this.
Vendor “A” says products succeed with their solution while failure results with a solution from “B” when doing “X”. Otoh, vendor “B” claims that “X” will fail when using a solution from vendor “A”. In fact, you can pick what you want to substitute for “X”, perhaps VDI, PCIe, Big Data, Little Data, Backup, Archive, Analytics, Private Cloud, Public Cloud, Hybrid Cloud, eDiscovery you name it.
This is not a complicated math or big data problem requiring a high-performance computing (HPC) platform. An HPC zetta-flop processing capability using 512-bit addressing of 9.9 (e.g. one nine) petabytes of battery-backed DRAM and an I/O capability of 9.99999 (e.g. five nines) trillion 8-bit IOPS to do table pivots or Runge-Kutta numerical analysis, MapReduce, SAS or other modeling with an optional iProduct or Android interface is not needed.
StorageIO images from touring the Texas Advanced Computing Center (TACC), an HPC facility
Can you solve this equation? Hint: it does not need a PhD or any other advanced degree. Another hint: if you have ever been on any side of the technology product and services decision-making table, regardless of the costume you wore, you should know the answer.
Of course there is the question of whether “X” would fail regardless of who or what “A” or “B”, let alone a “C”, “D” or “F”, happens to be. In other words, it is not the solution, technology, vendor or provider that is the issue, rather the problem, or perhaps even the lack thereof. Or is it a case where there is a solution from “A”, “B” or any other that is looking for a problem, and if it is the wrong problem, there can be a wrong solution and thus failure?
Another trick or treat game is when vendor public relations (PR) or analyst relations (AR) people ask for one thing and then deliver or ask for another. For example, some vendor or service provider, their marketing, AR and PR people or surrogates make contact wanting to tell of various success and failure stories. Of course, this is usually their success and somebody else's failure, or their victory over something or someone, which can sometimes be interesting. Of course, there are also the treats to get you to listen to the above, such as tempting you with a project if you meet with their subject, which may be the trick of a disappearing treat (e.g. magic, poof, it is gone after the discussion).
There is another AR and PR trick and treat where, on behalf of the organization or client they represent, they offer a perspective or exclusive insight on a competitor. Of course, the treat from their perspective is that they will generously expose all that is wrong with what a competitor is saying about its own (e.g. the competitor's) product.
Let me get this straight: I am not supposed to believe what somebody says about his or her own product; however, I am supposed to believe what a competitor says is wrong with the competition's product, and what is right with his or her own product.
Hmm, ok, so let me get this straight: a competitor, say “A”, wants to tell me that what somebody from “B” has told me is wrong, and I should schedule a visit with a truth squad member from “A” to get the record set straight about “B”?
Does that mean I then go to “B” for a rebuttal, as well as an update about “A” from “B”, assuming that what “A” has told me about themselves, and perhaps about “B” or any other, is also false?
To be fair, depending on your level of trust and confidence in a vendor, their personnel or surrogates, you might tend to believe more from them vs. others, or at least until you have been tricked after being given treats. There may be some who have been tricked, or who have applied too many treats to present a story where what is behind the costume might be a bit scary.
Having been through enough of these, I candidly believe that sometimes “A” or “B” or any other party actually does believe they have more or better info about their competitor, and that they can convince somebody about what their competitor is doing better than the competitor can. I also believe there are people out there who will go to “A” or “B” and believe what they are told based on their preference, bias or interests.
When I hear from vendors, VARs, solution or service providers and others, it is interesting hearing point, counterpoint and so forth; however, if time is limited, I am more interested in hearing from, say, “A” about themselves: what they are doing, where they are succeeding, where the challenges are, where they are going and, if applicable, more detail under NDA.
Customer success stories are good; however, again, if you are interested in what works, what kind of works, or what does not work, chances are that when looking for G2 vs. GQ, a non-scripted customer conversation or perspective on the good, the bad and the ugly is preferred, even if under NDA. Again, if time is limited, which it usually is, focus on what is being done with your solution and where it is going, and if compelled, send follow-up material, which can of course include MUD and FUD about others if that is your preference.
Then there is the trick where, during a 30-minute briefing, the vendor or solution provider is still talking about trends, customer pain points and what competitors are doing at 21 minutes into the call, with no sign of an announcement, update or news in sight.
Let's not forget the trick where the vendor marketing or PR person reaches out and says that the CEO, CMO, CTO or some other CxO or Chief Jailable Officer (CJO) wants to talk with you. Part of the trick is when the CxO actually makes it to the briefing and is not ready, does not know why the call is occurring, or thinks that a request has been made for an audience with them for an interview or something else.
A treat is when, 3 to 4 minutes into a briefing, the vendor or solution provider has already framed up what they are doing and why. This means getting to what they are announcing or planning, getting into a conversation to discuss it, and making good follow-up content and resources available.
Sometimes a treat is when a briefer goes on autopilot, nailing their script for 29 minutes of a 30-minute session, then using the last minute to ask if there are any questions. The reason autopilot briefings can be a treat is that when they are simply going over what is in the slide deck, webex or press release, they afford an opportunity to get caught up on other things while being talked at. Hmm, perhaps I need to consider playing some tricks in reward for those kinds of treats? ;)
Do not be scared; not everybody is out to trick you with treats, and not all treats have tricks attached to them. Be prepared, figure out who is playing tricks with treats, and who has treats without tricks.
Oh, and as a former IT customer, vendor and analyst, one of my favorites is supplying the contact information of my dogs to vendors who require registration on their websites for basic things such as data sheets. Another is supplying the contact information of competing vendors' sales reps to vendors who also require registration for basic data sheets or what should otherwise be generally available information, as opposed to more premium treats. Of course there are many more fun tricks; however, let's leave those alone for now.
Note: Zombie voting rules apply, which means vote early, vote often, and of course vote for those who cannot, including those who are dead (real or virtual).
Where To Learn More
View additional related material via the following links.
This is the second of two posts (here is the first post) in an ongoing industry trends and perspectives cloud conversations series looking at Dell and their cloud strategy story.
Simple: there have been some rather low-key, almost quiet or muddled announcements (also here, here and here) about Dell and Nirvanix collaborating around public cloud storage. Keep in mind that Nirvanix and IBM also announced a partnership not too long ago, leading some to jump to the conclusion that big blue was about to buy the startup vendor, even though IBM already has other cloud and storage as a service, backup as a service and DR as a service offerings; what the heck, the more the merrier for big blue?
What about Dell and their partnership with Nirvanix (more on that in the first post): did somebody jump the gun, or jump the shark?
Is Dell trying to walk the tightrope between being a supplier to major cloud providers while carefully moving into the cloud services market themselves, or are they simply addressing specific customer situations or opportunities, at least for the time being?
Alternatively, is this nothing more than Dell establishing another partnership with a technology partner who also happens to be in the services business, similar to what Dell is doing with OpenStack and others?
IMHO Dell has some of the pieces and partnerships and could be a strong contender in the SMB and SME private cloud space, along with VDI and related areas, given their Citrix, Microsoft and VMware partnerships. This also means leveraging their servers, storage, software, networking and other solutions to supply service providers.
The rest comes down to what markets or areas of focus Dell wants to target, which would in turn dictate how to extend what they already have, or what they need to go out and get or partner around.
What say you, what's your take on Dell's cloud strategy story and portfolio?
This is the first of a two-part post (click here for the second post) in an ongoing industry trends and perspectives cloud conversations series looking at Dell and their cloud strategy story. For background, some previous Dell posts are found here, here, here and here. Here is a link to video of the live Dell Storage Customer Advisory Panel (CAP) that Dell asked me to moderate back in June, which touches on some related themes and topics. Btw, fwiw and for disclosure, Dell AppAssure is a site advertiser on storageioblog.com ;).
If you consider object-based storage to be part of or a component of private clouds, at least for medical, healthcare and related focus areas, then Dell is already there with their DX object storage solutions (Caringo based).
If you view clouds as being part of services provided, including via hosting or similar, Dell is already there via their Perot Systems acquisition.
If you view cloud as being part of VDI, or VDI being part of cloud, Dell is there with their tools including various acquisitions and solution bundles.
On the other hand if you view clouds as reference architectures across VMware vSphere, Microsoft Hyper-V and Citrix Xen among others, guess what, Dell is also there with their VIS.
Or, if you view private clouds as a bundled solution (server, storage, hardware, software) such as EMC vBlock or NetApp FlexPod, then Dell vStart (not to be confused with a service) is on the list with other infrastructure stack solutions.
How about being a technology supplier to what you may consider true cloud providers or enablers, including those who use OpenStack or other APIs and cloud tools? Guess what, Dell is also there, including at Rackspace (per public web info).
So the above all comes back to the fact that Dell, like many vendors offering services, solutions and related items for data and information infrastructures, has diverse offerings including servers, storage, networking, hardware, software and support. Dell, like others similar to them, has to find a balance between providing services that compete with their customers and being a supplier to those same customers, such as Rackspace. In this case Dell is no different from EMC, who moved their Mozy backup service off to their VMware subsidiary and has managed to help define where VCE (and here) and ATMOS fit as products while being services capable. IBM has figured this out, having a mix of old-school services such as SmartCloud Services (or here), IBM Global Services and BCRS (business continuity recovery services), not to mention newer backup and storage cloud services, products and solutions they have acquired, OEM or have reseller agreements with.
HP has expanded their traditionally focused EDS as well as other HP services, with those offerings being joined by their Amazon-like cloud services including compute, storage and content distribution network (CDN) capabilities. NetApp is taking the partnering route, along with Cisco, staying focused for now at least on being a partner supplier. Oracle, well, Oracle is Oracle, and they have a mix of products and services. In fact, some might say Oracle is late to the cloud game; however, they have been in the game since the late 90s when they came out with Oracle Online, granted cloud purists will call that an application service provider (e.g. ASP) model vs. today's applications as a service (AaaS) models.