Server and StorageIO Update newsletter – June 2014
Part II: Lenovo TS140 Server and Storage I/O Review
This is the second of a two-part post series on my recent experiences with a Lenovo TS140 server; you can read part I here.
What Did I Do with the TS140
After an initial checkout in an office-type environment, I moved the TS140 into the lab area where it joined other servers being used for various things.
Some of those activities included using Windows Server 2012 Essentials along with associated admin activities. I also installed VMware ESXi 5.5 and ran into a few surprises. One was that I needed to apply an update to the VMware drivers to support the onboard Intel NIC, as well as enable the VT and EP virtualization-assist modes via the BIOS. The biggest surprise was discovering that I could not install VMware onto an internal drive attached via one of the internal SATA ports, which turns out to be a BIOS firmware issue.
Lenovo confirmed this when I brought it to their attention; the workaround is to install VMware onto a USB flash SSD thumb drive or other USB-attached drive, or to use external storage via an adapter. As of this time Lenovo is aware of the VMware issue, however no date for a new BIOS or firmware is available. Speaking of BIOS, I did notice that newer BIOS and firmware (FBKT70AUS, December 2013) was available than what was installed (FB48A, August 2013). So I went ahead and did this upgrade, which was a smooth, quick and easy process. The process included going to the Lenovo site (see resource links below), selecting the applicable download, and then installing it following the directions.
Since I was going to install various PCIe SAS adapters into the TS140 attached to external SAS and SATA storage, this was not a big issue, more of an inconvenience. Likewise, for using storage mounted internally the workaround is to use a SAS or SATA adapter with internal ports (or a cable). Speaking of USB workarounds, if you have an HDD, HHDD, SSHD or SSD that is a SATA device and need to attach it via USB, then get one of these cables. Note that there are USB 3.0 and USB 2.0 cables (see below) available, so choose wisely.
USB to SATA adapter cable
In addition to running various VMware-based workloads with different guest VMs, I also ran Futuremark PCmark (btw, if you do not have this in your server storage I/O toolbox, it should be) to gauge the system's performance. As mentioned, the TS140 is quiet. However, it also has good performance depending on which processor you select. Note that while the TS140 has a list price under $400 USD as of the time of this post, that will change depending on which processor, amount of memory, software and other options you choose.
PCmark test | Results |
Composite score | 2274 |
Compute | 11530 |
System Storage | 2429 |
Secondary Storage | 2428 |
Productivity | 1682 |
Lightweight | 2137 |
PCmark results are shown above for the Windows Server 2012 system (non-virtualized) configured as shipped and received from Lenovo.
What I liked
Unbelievably quiet, which may not seem like a big deal, however if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;).
Something else that I liked is that the TS140 with the E3-1220 v3 processor family supports PCIe G3 adapters, which are useful if you are going to be using 10GbE cards or 12Gbps SAS and faster cards to move lots of data, support more IOPS or reduce response time latency.
In addition, while only 4 DIMM slots is not very much, it's more than what some other similarly focused systems have, plus with large-capacity DIMMs you can still get a nice system, or two, or three or four for a cluster at a good price or value (Hmm, VSAN anybody?). Also, while not a big item, the TS140 does not require ordering an HDD or SSD with the system if you are not also ordering software; you can take it diskless and use your own.
Speaking of IO slots, naturally I'm interested in server storage I/O, so having multiple slots is a must have, along with a quad core processor (pretty much standard these days) with VT and EP for supporting VMware (these were disabled in the BIOS, however that was an easy fix).
Then there is the price, starting at $379 USD as of this posting for a bare bones system (e.g. minimal memory, basic processor, no software), with the price increasing as you add more items. What I like about this price point is that it includes the PCIe G3 slot as well as other PCIe G2 slots for expansion, meaning I can install 12Gbps (or 6Gbps) SAS storage I/O adapters, or other PCIe cards including SSD, RAID, 10GbE CNA or other cards to meet various needs including software defined storage.
What I did not like
I would like to have had at least six vs. four DIMM slots, however keeping in mind the price point where this system is positioned, not to mention what you could do with it thinking outside of the box, I'm fine with only 4 x DIMM. Space for more internal storage would be nice, however if that is what you need, then there are the larger Lenovo models to look at. By the way, thinking outside of the box, could you do something like a Hadoop, OpenStack, Object Storage, VMware VSAN or other cluster with these, in addition to using one as a Windows Server?
Yup.
Granted you won’t have as much internal storage, as the TS140 only has two fixed drive slots (for more storage there is the model TD340 among others).
However it is not that difficult to add more (not Lenovo endorsed) by adding a StarTech enclosure like I did with my other systems (see here). Oh, and those extra PCIe slots, that's where a 12Gbps (or 6Gbps) adapter comes into play while leaving room for GbE cards and PCIe SSD cards. Btw, not sure what to do with that PCIe x1 slot? That's a good place for a dual GbE NIC to add more networking ports, or a SATA adapter for attaching larger-capacity slower drives.
StarTech 2.5″ SAS SATA drive enclosure via Amazon.com
If VMware is not a requirement and you need a good entry level server for a large SOHO or small SMB environment, or if you are looking to add a flexible server to a lab or for other things, the TS140 is good (see disclosure below) and quiet.
Otoh as mentioned, there is a current issue with the BIOS/firmware with the TS140 involving VMware (tried ESXi 5 & 5.5).
However I did find a workaround: the current TS140 BIOS/firmware does work with VMware if you install onto a USB drive and then use external SAS, SATA or other accessible storage, which is how I ended up using it.
Lenovo TS140 resources include
Summary
Disclosure: Lenovo loaned the TS140 to me for just under two months, including covering shipping costs, at no charge (to them or to me); hence this is not a sponsored post or review. On the other hand, I have since placed an order for a new TS140 similar to the one tested, bought on-line from Lenovo.
This new TS140 server that I bought joins the Dell Inspiron I added late last year (read more about that here) as well as other HP and Dell systems.
Overall I give the Lenovo TS140 a provisional "A", which would become a solid "A" once the BIOS/firmware issue mentioned above is resolved for VMware. Otoh, if you are not concerned about using the TS140 for VMware (or can do the workaround), then consider it an "A".
As mentioned above, I liked it so much I actually bought one to add to my collection.
Ok, nuff said (for now)
Cheers
Gs
Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved
Server virtualization nested and tiered hypervisors
A few years ago I did a piece (click here) about the then emerging trend of tiered hypervisors, particularly using different products or technologies in the same environment.
Tiered snow management tools and technologies
Tiered hypervisors can be as simple as using different technologies such as VMware vSphere/ESXi, Microsoft Hyper-V, KVM or Xen in your environment on different physical machines (PMs) for various business and application purposes. This is similar to having different types or tiers of technology including servers, storage, networks or data protection to meet various needs.
Another aspect is nesting hypervisors on top of each other for testing, development and other purposes.
I use nested VMware ESXi for testing various configurations as well as verifying new software when needed, or creating a larger virtual environment for functionality simulations. If you are new to nesting which is running a hypervisor on top of another hypervisor such as ESXi on ESXi or Hyper-V on ESXi here are a couple of links to get you up to speed. One is a VMware knowledge base piece, two are from William Lam (@lamw) Virtual Ghetto (getting started here and VSAN here) and the other is from Duncan Epping @DuncanYB Yellow Bricks sites.
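For those who want to try nesting, it comes down to a couple of per-VM settings. Here is a minimal sketch from the ESXi shell for vSphere 5.1 or later, with an illustrative datastore path and VM name (not my actual lab configuration); see the William Lam links above for the authoritative steps for your particular version. Back up the .vmx first and make the changes while the VM is powered off.

# allow hardware-assisted virtualization to be passed into the guest
echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/dat1/nested-esxi/nested-esxi.vmx
# identify the guest as an ESXi 5.x host so vSphere handles it accordingly
echo 'guestOS = "vmkernel5"' >> /vmfs/volumes/dat1/nested-esxi/nested-esxi.vmx

Note that on ESXi 5.0 the equivalent was a host-wide vhv.allow setting in /etc/vmware/config rather than the per-VM vhv.enable, another reason to verify against the write-ups above.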
Recently I did a piece over at FedTech titled 3 Tips for Maximizing Tiered Hypervisors that looks at using multiple virtualization tools for different applications and the benefits they can provide.
Here is an excerpt:
Tiered hypervisors can be run in different configurations. For example, an agency can run multiple server hypervisors on the same physical blade or server or on separate servers. Having different tiers or types of hypervisors for server and desktop virtualization is similar to using multiple kinds of servers or storage hardware to meet different needs. Lower-cost hypervisors may have lacked some functionality in the past, but developers often add powerful new capabilities, making them an excellent option.
IT administrators who are considering the use of tiered or multiple hypervisors should know the answers to these questions:
- How will the different hypervisors be managed?
- Will the environment need new management tools for backup, monitoring, configuration, provisioning or other routine functions?
- Do existing tools offer support for different hypervisors?
- Will the hypervisors have dedicated PMs or be nested?
- How will IT migrate virtual machines and their guests between different hypervisors? For example if using VMware and Hyper-V, will you use VMware vCenter Multi-Hypervisor Manager or something similar?
So how about it, how are you using and managing tiered hypervisors?
Ok, nuff said for now.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Can we get a side of context with them IOPS and other server storage metrics?
What's the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.
There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.
In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.
Expanding the conversation, the need for more context
The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for DRAM and nand flash Solid State Devices (SSD), Hard Disk Drives (HDD) along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.
This is good because it expands the conversation beyond just cost for space capacity into other aspects including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.
Adding a side of context
The catch is that IOPS, while part of the equation, are just one aspect of performance; by themselves, without context, they may have little meaning if not be misleading in some situations.
Granted, it can be entertaining, fun to talk about or simply make good press copy to cite a million IOPS. However, IOPS vary in size depending on the type of work being done, not to mention reads or writes, random and sequential, which also have a bearing on data throughput or bandwidth (Mbytes per second) along with response time. Not to mention block, file, object or blob as well as table.
However, are those million IOPS applicable to your environment or needs?
Likewise, what do those million or more IOPS represent about the type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on an SSD or HDD?
How about the response time or latency for achieving them IOPS?
In other words, what is the context of those metrics and why do they matter?
Click on image to view more metrics that matter including IOP’s for HDD and SSD’s
Metrics that matter give context: for example, IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, real world reflective.
As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator and then align to your needs. The tool or technology should work for you, not the other way around.
Here are some examples of context that can be added to help make IOP’s and other metrics matter:
- What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
- Are they reads, writes, random, sequential or mixed and what percentage?
- How was the storage configured including RAID, replication, erasure or dispersal codes?
- Then there is the latency or response time and IO queue depths for the given number of IOPS.
- Let us not forget if the storage systems (and servers) were busy with other work or not.
- If there is a cost per IOP, is that list price or discount (hint, if discount start negotiations from there)
- What was the number of threads or workers, along with how many servers?
- What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
- Was the IOPS number with one worker or multiple workers on a single or multiple servers?
- Did the IOPS number come from a single storage system or a total of multiple systems?
- Fast storage needs fast servers and networks, what was their configuration?
- Was the performance a short burst, or long sustained period?
- What was the size of the test data used; did it all fit into cache?
- Were short-stroking (for IOPS) or long-stroking (for bandwidth) techniques used?
- Were data footprint reduction (DFR) techniques (thin provisioning, compression or dedupe) used?
- Was write data committed synchronously to storage, or deferred (aka lazy writes)?
The above are just a sampling and not all may be relevant to your particular needs, however they help to put IOPS into more context. Another consideration around IOPS is the configuration of the environment: are they from an actual running application using some measurement tool, or are they generated from a workload tool such as Iometer, Iorate, Vdbench among others?
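One quick piece of context you can compute yourself: bandwidth is roughly IOPS multiplied by IO size, so a quoted IOPS number implies a bandwidth number (and vice versa). A minimal sanity-check sketch in shell, using illustrative numbers:

# bandwidth (MBytes/sec) ~= IOPS x IO size (bytes) / 1,000,000
echo $(( 1000000 * 64 / 1000000 ))      # a million 64 byte IOPS is only about 64 MBytes/sec
echo $(( 1000000 * 65536 / 1000000 ))   # a million 64 Kbyte IOPS is about 65,536 MBytes/sec

If a vendor's IOPS, IO size and bandwidth claims do not roughly reconcile this way, that is a cue to ask for more context.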
Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.
Does size or age of vendors make a difference when it comes to context?
Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.
Meanwhile other vendors are doing a good job of adding context to their IOP, response time or bandwidth among other metrics that matter. There is a mix of startups and established vendors that give context with their IOPS; likewise, size or age does not seem to matter for those who lack context.
Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.
Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.
Where To Learn More
View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.
- Can we get a side of context with them IOPS and other storage metrics?
- WHEN AND WHERE TO USE NAND FLASH SSD FOR VIRTUAL SERVERS
- Revisiting RAID storage remains relevant and resources
- NVMe overview and primer – Part I
- Part 1 of HDD for content servers series Trends and Content Application Servers
- Part 2 of HDD for content servers series Content application server decisions and testing plans
- Part 3 of HDD for content servers series Test hardware and software configuration
- Part 4 of HDD for content servers series Large file I/O processing
- Part 5 of HDD for content servers series Small file I/O processing
- Part 6 of HDD for content servers series General I/O processing
- Part 7 of HDD for content servers series How HDD continue to evolve over different generations and wrap up
- As the platters spin, HDD’s for cloud, virtual and traditional storage environments
- How many IOPS can a HDD, HHDD or SSD do?
- Hard Disk Drives (HDD) for Virtual Environments
- Server and Storage I/O performance and benchmarking tools
- Server storage I/O performance benchmark workload scripts Part I and Part II
- How to test your HDD, SSD or all flash array (AFA) storage fundamentals
- What is the best server storage I/O workload benchmark? It depends
- I/O, I/O how well do you know about good or bad server and storage I/Os?
- Big Files Lots of Little File Processing Benchmarking with Vdbench
- Part II – NVMe overview and primer (Different Configurations)
- Part III – NVMe overview and primer (Need for Performance Speed)
- Part IV – NVMe overview and primer (Where and How to use NVMe)
- Part V – NVMe overview and primer (Where to learn more, what this all means)
- PCIe Server I/O Fundamentals
- If NVMe is the answer, what are the questions?
- NVMe Wont Replace Flash By Itself
- Via Computerweekly – NVMe discussion: PCIe card vs U.2 and M.2
- Intel and Micron unveil new 3D XPoint Non Volatie Memory (NVM) for servers and storage
- Part II – Intel and Micron new 3D XPoint server and storage NVM
- Part III – 3D XPoint new server storage memory from Intel and Micron
- Server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
- Data Infrastructure Overview, Its Whats Inside of Data Centers
- All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
- Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
- The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
- The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
- Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
- Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
- Various Data Infrastructure related events, webinars and other activities
- www.objectstoragecenter.com and Software Defined, Cloud, Bulk and Object Storage Fundamentals
- Server Storage I/O Network PCIe Fundamentals
Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.
What This All Means
What this means is let us start putting, and asking for, metrics that matter such as IOPS with context.
If you have a great IOP metric and you want it to matter, then include some context such as what size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random or sequential.
IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.
Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.
So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?
Ok, nuff said, for now.
Gs
Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
HDS Mid Summer Storage and Converged Compute Enhancements
Converged Compute, SSD Storage and Clouds
Hitachi Data Systems (HDS) announced today several enhancements to their data storage and unified compute portfolio as part of their Maximize I.T. initiative.
Setting the context
As part of setting the stage for this announcement, HDS presented the following strategy as part of their vision for IT transformation and cloud computing.
What was announced
This announcement builds on earlier ones around HDS Unified Storage (HUS) primary storage using nand flash MLC Solid State Devices (SSD) and Hard Disk Drives (HDD's), along with unified block and file (NAS), as well as the Unified Compute Platform (UCP), also known as converged compute, networking, storage and software. These enhancements follow recent updates to the HDS Content Platform (HCP) for object, file and content storage.
There are three main focus areas of the announcement:
- Flash SSD storage enhancements for HUS
- Unified with enhanced file (aka BlueArc based)
- Enhanced unified compute (UCP)
HDS Flash SSD acceleration
The question should not be if SSD is in your future, rather when, where, with what and how much will be needed.
As part of this announcement, HDS is releasing an all flash SSD based HUS enterprise storage system. Similar to what other vendors have done, HDS is attaching flash SSD storage to their HUS systems in place of HDD's. Hitachi has developed their own SSD module, announced in 2012 (read more here). The HDS SSD modules use Multi Level Cell (MLC) nand flash chips (dies) and now support 1.6TB of storage space capacity per unit. This is different from other vendors who either use nand flash SSD drive form factor devices (e.g. Intel, Micron, Samsung, SanDisk, Seagate, STEC (now WD), WD among others), PCIe form factor cards (e.g. FusionIO, Intel, LSI, Micron, Virident among others), or attach a third-party external SSD device (e.g. IBM/TMS, Violin, Whiptail etc.).
Like some other vendors, HDS has done more than simply attach an SSD (drive, PCIe card, or external device) to their storage systems and call it an integrated solution. What this means is that HDS has implemented software or firmware changes in their storage systems to manage durability and extend flash life given program erase (P/E) cycle wear. In addition, HDS has implemented performance optimizations in their storage systems to leverage the faster SSD modules; after all, faster storage media or devices need fast storage systems or controllers.
While the new all flash storage system can be initially bought with just SSD, similar to other hybrid storage solutions, hard disk drives (HDD’s) can also be installed. For enabling full performance at low latency, HDS is addressing both the flash SSD modules as well as the storage systems they attach to including back-end, front-end and caching in-between.
The release enables 500,000 (half a million) IOPS, though no IOP size, read vs. write mix, or random vs. sequential context was indicated. A future (non-disruptive) firmware update is claimed by HDS to enable higher performance of 1,000,000 IOPS at under a millisecond.
In addition to future performance improvements, HDS is also indicating increased storage space capacity for its MLC flash SSD modules (1.6TB today). Using multiple 1.6TB modules, up to 154TB of flash SSD can be placed in a single rack.
HDS File and Network Attached Storage (NAS)
HUS unified NAS file system and gateway (BlueArc based) enhancements include:
- New platforms leveraging faster processors (both Intel and Field Programmable Gate Arrays (FPGA’s))
- Common management and software tools from 3000 to new 4000 series
- Bandwidth doubled with faster connections and more memory
- Four 10GbE NAS serving ports (front-end)
- Four 8Gb Fibre Channel ports (back-end)
- FPGA leveraged for off-loading some dedupe functions (faster performance)
HDS Unified Compute Platform (UCP)
As part of this announcement, HDS is enhancing the Unified Compute Platform (UCP) offerings. HDS re-entered the compute market in 2012 joining other vendors offering unified compute, storage and networking solutions. The HDS converged data infrastructure competes with AMD (Seamicro) SM15000, Dell vStart and VRTX (for lower end market), EMC and VCE vBlock, NetApp FlexPod along with those from HP (or Moonshot micro servers), IBM Puresystems, Oracle and others.
UCP Pro for VMware vSphere
- Turnkey converged solution (Compute, Networking, Storage, Software)
- Includes VMware vSphere pre-installed (OEM from VMware)
- Flexible compute blade options
- Three storage system options (HUS, HUS VM and VSP)
- Cisco and Brocade IP networking
- UCP Director 3.0 with enhanced automation and orchestration software
UCP Select for Microsoft Private Cloud
- Supports Hyper-V 3.0 server virtualization
- Live migration with DR and resynch
- Microsoft Fast Track certified
UCP Select for Oracle RAC
- HDS Flash SSD storage
- SMP x86 compute for performance
- 2x improvement in IOPS at less than 1 millisecond
- Common management with HiCommand suite
- Integrated with Oracle RMAN and OVM
UCP Select for SAP HANA
- Scale out to 8TB of memory (DRAM)
- Tier 1 storage system certified for SAP HANA DR
- Leverages SAP HANA SAP storage connector API
What this all means
With these announcements HDS is extending its storage centric hardware, software and services solution portfolio for block, file and object access across different usage tiers (systems, applications, mediums). HDS is also expanding their converged unified compute platforms to stay competitive with others including Dell, EMC, Fujitsu, HP, IBM, NEC, NetApp and Oracle among others. For environments with HDS storage looking for converged solutions to support VMware, Microsoft Hyper-V, Oracle or SAP HANA these UCP systems are worth checking out as part of evaluating vendor offerings. Likewise for those who have HDS storage exploring SSD offerings, these announcements give opportunities to enable consolidation as do the unified file (NAS) offerings.
Note that for now HDS does not have a public formalized message or story around PCIe flash cards, however they have relationships with various vendors as part of their UCP offerings.
Overall a good set of incremental enhancements for HDS to stay competitive and leverage their field proven capabilities including management software tools.
Ok, nuff said
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Part II: How many IOPS can a HDD HHDD SSD do with VMware?
Updated 2/10/2018
This is the second post of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. medium) characteristics and how many IOPS a HDD, HHDD or SSD can do with VMware. In the first post the focus was around putting some context around drive or device performance, with this second part looking at some workload characteristics (e.g. benchmarks).
A common question is how many IOPS (IO Operations Per Second) can a storage device or system do?
The answer is or should be it depends.
Here are some examples to give you some more insight.
For example, the following shows how IOPS vary by changing the percent of reads, writes, random and sequential for a 4K (4,096 bytes or 4 KBytes) IO size with each test step (4 minutes each).
IO Size for test | Workload Pattern of test | Avg. Resp (R+W) ms | Avg. IOP Sec (R+W) | Bandwidth KB Sec (R+W) |
4KB | 100% Seq 100% Read | 0.0 | 29,736 | 118,944 |
4KB | 60% Seq 100% Read | 4.2 | 236 | 947 |
4KB | 30% Seq 100% Read | 7.1 | 140 | 563 |
4KB | 0% Seq 100% Read | 10.0 | 100 | 400 |
4KB | 100% Seq 60% Read | 3.4 | 293 | 1,174 |
4KB | 60% Seq 60% Read | 7.2 | 138 | 554 |
4KB | 30% Seq 60% Read | 9.1 | 109 | 439 |
4KB | 0% Seq 60% Read | 10.9 | 91 | 366 |
4KB | 100% Seq 30% Read | 5.9 | 168 | 675 |
4KB | 60% Seq 30% Read | 9.1 | 109 | 439 |
4KB | 30% Seq 30% Read | 10.7 | 93 | 373 |
4KB | 0% Seq 30% Read | 11.5 | 86 | 346 |
4KB | 100% Seq 0% Read | 8.4 | 118 | 474 |
4KB | 60% Seq 0% Read | 13.0 | 76 | 307 |
4KB | 30% Seq 0% Read | 11.6 | 86 | 344 |
4KB | 0% Seq 0% Read | 12.1 | 82 | 330 |
Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 4K IO size
In the above example the drive is a 1TB 7200 RPM 3.5 inch Dell (Western Digital) 3Gb SATA device doing raw (non file system) IO. Note the high IOP rate with 100 percent sequential reads and a small IO size which might be a result of locality of reference due to drive level cache or buffering.
Some drives have larger buffers than others, from a couple of MBs to 16MB (or more) of DRAM that can be used for read-ahead caching. Note that this level of cache is independent of a storage system, RAID adapter or controller, or other forms and levels of buffering.
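As a quick cross-check of the numbers above, the bandwidth column should reconcile as IOPS multiplied by IO size (here 4 KBytes). For example, in shell arithmetic:

echo $(( 29736 * 4 ))   # 100% seq 100% read row: 118,944 KBytes/sec, matching the table
echo $(( 100 * 4 ))     # 0% seq (random) 100% read row: 400 KBytes/sec, matching the table

This is the same IOPS-times-IO-size context discussed elsewhere in this newsletter, and a handy way to spot transcription or measurement errors in any results table.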
Does this mean you can expect or plan on getting those levels of performance?
I would not make that assumption, and thus this serves as an example of using metrics like these in the proper context.
Building off of the previous example, the following is using the same drive however with a 16K IO size.
IO Size for test | Workload Pattern of test | Avg. Resp (R+W) ms | Avg. IOP Sec (R+W) | Bandwidth KB Sec (R+W) |
16KB | 100% Seq 100% Read | 0.1 | 7,658 | 122,537 |
16KB | 60% Seq 100% Read | 4.7 | 210 | 3,370 |
16KB | 30% Seq 100% Read | 7.7 | 130 | 2,080 |
16KB | 0% Seq 100% Read | 10.1 | 98 | 1,580 |
16KB | 100% Seq 60% Read | 3.5 | 282 | 4,522 |
16KB | 60% Seq 60% Read | 7.7 | 130 | 2,090 |
16KB | 30% Seq 60% Read | 9.3 | 107 | 1,715 |
16KB | 0% Seq 60% Read | 11.1 | 90 | 1,443 |
16KB | 100% Seq 30% Read | 6.0 | 165 | 2,644 |
16KB | 60% Seq 30% Read | 9.2 | 109 | 1,745 |
16KB | 30% Seq 30% Read | 11.0 | 90 | 1,450 |
16KB | 0% Seq 30% Read | 11.7 | 85 | 1,364 |
16KB | 100% Seq 0% Read | 8.5 | 117 | 1,874 |
16KB | 60% Seq 0% Read | 10.9 | 92 | 1,472 |
16KB | 30% Seq 0% Read | 11.8 | 84 | 1,353 |
16KB | 0% Seq 0% Read | 12.2 | 81 | 1,310 |
Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 16K IO size
The previous two examples are excerpts of a series of workload simulation tests (ok, you can call them benchmarks) that I have done to collect information, as well as try some different things out.
The following is an example of the summary for each test output that includes the IO size, workload pattern (reads, writes, random, sequential), duration for each workload step, totals for reads and writes, along with averages including IOP’s, bandwidth and latency or response time.
Want to see more numbers, speeds and feeds? Check out the following table, which will be updated with extra results as they become available.
Device | Vendor | Make | Model | Form Factor | Capacity | Interface | RPM Speed | Raw Test Result |
HDD | HGST | Desktop | HK250-160 | 2.5 | 160GB | SATA | 5.4K | |
HDD | Seagate | Mobile | ST2000LM003 | 2.5 | 2TB | SATA | 5.4K | |
HDD | Fujitsu | Desktop | MHWZ160BH | 2.5 | 160GB | SATA | 7.2K | |
HDD | Seagate | Momentus | ST9160823AS | 2.5 | 160GB | SATA | 7.2K | |
HDD | Seagate | MomentusXT | ST95005620AS | 2.5 | 500GB | SATA | 7.2K(1) | |
HDD | Seagate | Barracuda | ST3500320AS | 3.5 | 500GB | SATA | 7.2K | |
HDD | WD/Dell | Enterprise | WD1003FBYX | 3.5 | 1TB | SATA | 7.2K | |
HDD | Seagate | Barracuda | ST3000DM01 | 3.5 | 3TB | SATA | 7.2K | |
HDD | Seagate | Desktop | ST4000DM000 | 3.5 | 4TB | SATA | HDD | |
HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | SATA | HDD | |
HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | 12GSAS | HDD | |
HDD | Seagate | Savio 10K.3 | ST9300603SS | 2.5 | 300GB | SAS | 10K | |
HDD | Seagate | Cheetah | ST3146855SS | 3.5 | 146GB | SAS | 15K | |
HDD | Seagate | Savio 15K.2 | ST9146852SS | 2.5 | 146GB | SAS | 15K | |
HDD | Seagate | Ent. 15K | ST600MP0003 | 2.5 | 600GB | SAS | 15K | |
SSHD | Seagate | Ent. Turbo | ST600MX0004 | 2.5 | 600GB | SAS | SSHD | |
SSD | Samsung | 840 Pro | MZ-7PD256 | 2.5 | 256GB | SATA | SSD |
SSD | Seagate | 600 SSD | ST480HM000 | 2.5 | 480GB | SATA | SSD |
SSD | Seagate | 1200 SSD | ST400FM0073 | 2.5 | 400GB | 12GSAS | SSD | |
Performance characteristics 1 worker (thread count) for RAW IO (non-file system)
Note: (1) the Seagate Momentus XT is a Hybrid Hard Disk Drive (HHDD) based on a 7.2K 2.5 inch HDD with SLC nand flash integrated as a read buffer in addition to the normal DRAM buffer. This model is an XT I (4GB SLC nand flash); an XT II (8GB SLC nand flash) may be added at some future time.
As a starting point, these results are raw IO, with file system based information to be added soon along with more devices. These results are for tests with one worker or thread count; other results, such as with 16 workers or thread counts, will be added to show how those differ.
The above results include all reads, all writes, a mix of reads and writes, along with all random, sequential and mixed for each IO size. IO sizes include 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1024K and 2048K. As with any workload simulation, benchmark or comparison test, take these results with a grain of salt as your mileage can and will vary. For example, you will see what I consider very high IO rates with sequential reads even without file system buffering. These results might be due to locality of reference, with IO's being resolved out of the drive's DRAM cache (read ahead), which varies in size across different devices. Use the vendor model numbers in the table above to check the manufacturers' specs on drive DRAM and other attributes.
If you are used to seeing 4K or 8K and wonder why anybody would be interested in some of the larger sizes, take a look at big fast data or cloud and object storage. For some of those applications 2048K may not seem all that big. Likewise, if you are used to the larger sizes, there are still applications doing smaller sizes. Sorry for those who like 512 byte or smaller IO's, as they are not included. Note that for all of these, unless indicated, a 512 byte standard sector or drive format is used as opposed to the emerging Advanced Format (AF) 4KB sector or block size. Watch for some more drive and device types to be added to the above, along with results for more workers or thread counts, along with file system and other scenarios.
Using VMware as part of a Server, Storage and IO (aka StorageIO) test platform
The above performance results were generated on Ubuntu 12.04 (since upgraded to 14.04) hosted on a purchased VMware vSphere 5.1 (since upgraded to 5.5U2) system with vCenter enabled (you can get the free ESXi version here). I also have VMware Workstation installed on some of my Windows-based laptops for doing preliminary testing of scripts and other activity prior to running them on the larger server-based VMware environment. Other VMware tools include vCenter Converter, vSphere Client and CLI. Note that other guest virtual machines (VMs) were idle during the tests (e.g. other guest VMs were quiet). You may experience different results if you ran Ubuntu native on a physical machine or with different adapters, processors and device configurations among many other variables (that was a disclaimer btw ;) ).
All of the devices (HDD, HHDD, SSD’s including those not shown or published yet) were Raw Device Mapped (RDM) to the Ubuntu VM bypassing VMware file system.
Example of creating an RDM for a local SAS or SATA direct attached device:

vmkfstools -z /vmfs/devices/disks/naa.600605b0005f125018e923064cc17e7c /vmfs/volumes/dat1/RDM_ST1500Z110S6M5.vmdk

The above uses the drive's address (found by doing a ls -l /dev/disks via the VMware shell command line) to create a vmdk container stored in a datastore. Note that the RDM being created does not actually store data in the .vmdk; it is there for VMware management operations.
If you are not familiar with how to create a RDM of a local SAS or SATA device, check out this post to learn how. This is important to note in that while VMware was used as a platform to support the guest operating systems (e.g. Ubuntu or Windows), the real devices are not being mapped through or via VMware virtual drives.
The above shows examples of RDM SAS and SATA devices along with other VMware devices and datastores. In the next figure is an example of a workload being run in the test environment.
One of the advantages of using VMware (or other hypervisor) with RDM’s is that I can quickly define via software commands where a device gets attached to different operating systems (e.g. the other aspect of software defined storage). This means that after a test run, I can quickly simply shutdown Ubuntu, remove the RDM device from that guests settings, move the device just tested to a Windows guest if needed and restart those VMs. All of that from where ever I happen to be working from without physically changing things or dealing with multi-boot or cabling issues.
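As a hedged sketch of what that looks like from the ESXi shell (the VM ids below are illustrative, and in practice I detach and attach the RDM pointer vmdk via each guest's settings in the vSphere Client):

vim-cmd vmsvc/getallvms      # list VM ids and names
vim-cmd vmsvc/power.off 12   # shut down the Ubuntu guest (example id 12)
# remove the RDM pointer vmdk from the Ubuntu guest's settings, add it
# to the Windows guest's settings, then:
vim-cmd vmsvc/power.on 15    # start the Windows guest (example id 15)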
Where To Learn More
View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.
- Can we get a side of context with them IOPS and other storage metrics?
- WHEN AND WHERE TO USE NAND FLASH SSD FOR VIRTUAL SERVERS
- Revisiting RAID storage remains relevant and resources
- NVMe overview and primer – Part I
- Part 1 of HDD for content servers series Trends and Content Application Servers
- Part 2 of HDD for content servers series Content application server decisions and testing plans
- Part 3 of HDD for content servers series Test hardware and software configuration
- Part 4 of HDD for content servers series Large file I/O processing
- Part 5 of HDD for content servers series Small file I/O processing
- Part 6 of HDD for content servers series General I/O processing
- Part 7 of HDD for content servers series How HDD continue to evolve over different generations and wrap up
- As the platters spin, HDD’s for cloud, virtual and traditional storage environments
- How many IOPS can a HDD, HHDD or SSD do?
- Hard Disk Drives (HDD) for Virtual Environments
- Server and Storage I/O performance and benchmarking tools
- Server storage I/O performance benchmark workload scripts Part I and Part II
- How to test your HDD, SSD or all flash array (AFA) storage fundamentals
- What is the best server storage I/O workload benchmark? It depends
- I/O, I/O how well do you know about good or bad server and storage I/Os?
- Big Files Lots of Little File Processing Benchmarking with Vdbench
- Part II – NVMe overview and primer (Different Configurations)
- Part III – NVMe overview and primer (Need for Performance Speed)
- Part IV – NVMe overview and primer (Where and How to use NVMe)
- Part V – NVMe overview and primer (Where to learn more, what this all means)
- PCIe Server I/O Fundamentals
- If NVMe is the answer, what are the questions?
- NVMe Wont Replace Flash By Itself
- Via Computerweekly – NVMe discussion: PCIe card vs U.2 and M.2
- Intel and Micron unveil new 3D XPoint Non Volatie Memory (NVM) for servers and storage
- Part II – Intel and Micron new 3D XPoint server and storage NVM
- Part III – 3D XPoint new server storage memory from Intel and Micron
- Server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
- Data Infrastructure Overview, Its Whats Inside of Data Centers
- All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
- Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
- The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
- The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
- Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
- Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
- Various Data Infrastructure related events, webinars and other activities
- www.objectstoragecenter.com and Software Defined, Cloud, Bulk and Object Storage Fundamentals
- Server Storage I/O Network PCIe Fundamentals
Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.
What This All Means
So how many IOPs can a device do?
That depends, however have a look at the above information and results.
Check back from time to time here to see what is new or has been added including more drives, devices and other related themes.
Ok, nuff said, for now.
Gs
Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
How many I/O IOPS can flash SSD or HDD do with VMware?
Updated 2/10/2018
A common question I run across is how many I/O operations per second (IOPS) a flash SSD or HDD storage device or system can do or give.
The answer is or should be it depends.
This is the first of a two-part series looking at storage performance, and in context specifically around drive or device (e.g. mediums) characteristics across HDD, HHDD and SSD that can be found in cloud, virtual, and legacy environments. In this first part the focus is around putting some context around drive or device performance with the second part looking at some workload characteristics (e.g. benchmarks).
What about cloud, tape summit resources, storage systems or appliances?
Let's leave those for a different discussion at another time.
Getting started
Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and IO performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configuration, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).
Btw, if you happen to be interested in learning more about server, storage and IO performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier), which has a bit of information on it. There is also coverage of metrics and planning in my two other books The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested, drop me a note and I can fill you in on the details.
There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS, some that are older, while others may be guessed or measured in different ways. However, the answer is that it depends on many things, ranging from whether it is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), or whether it is attached to a storage system, appliance, or RAID adapter card among others.
Taking a step back, the big picture
Various HDD, HHDD and SSD’s
Server, storage and I/O performance and benchmark fundamentals
Even if just looking at a HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM), to the interface including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA, or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 vs. 3.5 inch or enterprise vs. desktop drives. For example, some current generation 10K 2.5 inch HDDs can deliver the same or better performance than an older generation 3.5 inch 15K. Other drive factors (see this link for HDD fundamentals) include physical size such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise or desktop or consumer class, and amount of drive level cache (DRAM). Space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, a USB port, or an HBA or RAID adapter card, or in a storage system.
HDD fundamentals
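Speaking of RUTs based on RPM, the classic back-of-envelope estimate is that a single HDD delivers roughly 1 / (average seek time + average rotational latency) IOPS, where rotational latency in milliseconds is 60000 / (2 x RPM). A quick sketch in shell (the seek times are illustrative averages, not a specific drive's spec):

awk 'BEGIN {
  rpm[1]=7200;  seek[1]=8.5
  rpm[2]=10000; seek[2]=4.5
  rpm[3]=15000; seek[3]=3.5
  for (i = 1; i <= 3; i++) {
    rot = 60000 / (2 * rpm[i])   # average rotational latency in ms
    printf "%5d RPM ~ %3d IOPS\n", rpm[i], 1000 / (seek[i] + rot)
  }
}'

That yields the familiar RUT figures of very roughly 75-80, 130 and 180 IOPS for 7.2K, 10K and 15K drives, and also shows why such numbers by themselves say nothing about IO size, reads vs. writes or cache effects.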
How about benchmark and performance tricks for marketing or comparisons, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let's not forget about short stroking (only using a portion of a drive for better IOPS) or even long stroking (to get better bandwidth leveraging spiral transfers) among others.
Almost forgot, there are also thick, standard, thin and ultra thin drives in 2.5 and 3.5 inch form factors. What's the difference? The number of platters and read/write heads. Look at the following image showing various thickness 2.5 inch drives that have various numbers of platters to increase space capacity in a given density. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk based archives along with for onsite disk based backup targets (complementing my offsite cloud backups)?
Thick, thin and ultra thin devices
Beyond physical and configuration items, there are also logical configuration items including the type of workload, large or small IOPS, random, sequential, reads, writes or mixed (various random, sequential, read, write, large and small IO). Other considerations include file system or raw device, number of workers or concurrent IO threads, and size of the target storage space area, to decide the impact of any locality of reference or buffering. Some other items include how long the test or workload simulation ran for, and whether the device was new or worn in before use, among other items.
Tools and the performance toolbox
Then there are the various tools for generating IO’s or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (mix of free or for fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, Vmark, DVDstore and PCmark 7 among many others. Some are focused just on the storage system and IO path while others are application specific thus exercising servers, storage and IO paths.
Server, storage and IO performance toolbox
Having used Iometer since the late 90s, it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCmark 7 (full version), which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example, PCmark can use a secondary drive for doing IO to.
PCmark can be handy for spinning up (with VMware or other tools) lots of virtual Windows systems pointing to a NAS or other shared storage device doing real world type activity. Something that could be handy for testing or stressing virtual desktop infrastructures (VDI) along with other storage systems, servers and solutions. I also have Vdbench among other tools in the toolbox, including Iorate, which was used to drive the workloads shown below.
What I look for in a tool is how extensible the scripting capabilities are to define various workloads, along with the capabilities of the test engine. A nice GUI is handy, which makes Iometer popular, and yes, there are script capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that put more emphasis on extensibility vs. ease of use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPS using default settings to see what happens.
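As an example of that extensibility, here is what a minimal Vdbench parameter file sketch might look like, defining a 4K, 75 percent read, 100 percent random workload against a raw device and running it for four minutes. The device path and values are illustrative rather than one of my actual test scripts, so check the Vdbench documentation before trying it:

cat > example_4k.txt << 'EOF'
sd=sd1,lun=/dev/sdb,openflags=o_direct
wd=wd1,sd=sd1,xfersize=4k,rdpct=75,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=240,interval=30
EOF
./vdbench -f example_4k.txt

Storage definitions (sd), workload definitions (wd) and run definitions (rd) can then be multiplied out into the kind of stepped read/write, random/sequential matrices shown in part II of this series.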
Another handy tool is one for recording what's going on with a running system, including IO's, reads, writes, bandwidth or transfers, random and sequential among other things. This is where, when needed, I turn to something like HiMon from HyperIO; if you have not tried it, get in touch with Tom West over at HyperIO and tell him StorageIO sent you to get a demo or trial. HiMon is what I used for doing start, stop and boot testing among others, being able to see IO's at the Windows file system level (or below) including very early in the boot or shutdown phase.
Here is a link to some other things I did awhile back with HiMon to profile some Windows and VDI activity.
What’s the best tool or benchmark or workload generator?
The one that meets your needs, usually your applications or something as close as possible to it.
Various 2.5 and 3.5 inch HDD, HHDD, SSD with different performance
Where To Learn More
View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.
- Can we get a side of context with them IOPS and other storage metrics?
- WHEN AND WHERE TO USE NAND FLASH SSD FOR VIRTUAL SERVERS
- Revisiting RAID storage remains relevant and resources
- NVMe overview and primer – Part I
- Part 1 of HDD for content servers series Trends and Content Application Servers
- Part 2 of HDD for content servers series Content application server decisions and testing plans
- Part 3 of HDD for content servers series Test hardware and software configuration
- Part 4 of HDD for content servers series Large file I/O processing
- Part 5 of HDD for content servers series Small file I/O processing
- Part 6 of HDD for content servers series General I/O processing
- Part 7 of HDD for content servers series How HDD continue to evolve over different generations and wrap up
- As the platters spin, HDD’s for cloud, virtual and traditional storage environments
- How many IOPS can a HDD, HHDD or SSD do?
- Hard Disk Drives (HDD) for Virtual Environments
- Server and Storage I/O performance and benchmarking tools
- Server storage I/O performance benchmark workload scripts Part I and Part II
- How to test your HDD, SSD or all flash array (AFA) storage fundamentals
- What is the best server storage I/O workload benchmark? It depends
- I/O, I/O how well do you know about good or bad server and storage I/Os?
- Big Files Lots of Little File Processing Benchmarking with Vdbench
- Part II – NVMe overview and primer (Different Configurations)
- Part III – NVMe overview and primer (Need for Performance Speed)
- Part IV – NVMe overview and primer (Where and How to use NVMe)
- Part V – NVMe overview and primer (Where to learn more, what this all means)
- PCIe Server I/O Fundamentals
- If NVMe is the answer, what are the questions?
- NVMe Wont Replace Flash By Itself
- Via Computerweekly – NVMe discussion: PCIe card vs U.2 and M.2
- Intel and Micron unveil new 3D XPoint Non Volatie Memory (NVM) for servers and storage
- Part II – Intel and Micron new 3D XPoint server and storage NVM
- Part III – 3D XPoint new server storage memory from Intel and Micron
- Server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
- Data Infrastructure Overview, Its Whats Inside of Data Centers
- All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
- Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
- The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
- The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
- Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
- Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
- Various Data Infrastructure related events, webinars and other activities
- www.objectstoragecenter.com and Software Defined, Cloud, Bulk and Object Storage Fundamentals
- Server Storage I/O Network PCIe Fundamentals
Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.
What This All Means
That depends; however, continue reading part II of this series to see some results for various types of drives and workloads.
Ok, nuff said, for now.
Gs
Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
VMware buys Virsto, is it about storage hypervisors?
Yesterday VMware announced that it is acquiring the IO performance optimization and acceleration software vendor Virsto for an undisclosed amount.
Some may know Virsto due to their latching and jumping onto the storage hypervisor bandwagon as part of storage virtualization and virtual storage. On the other hand, some may know Virsto for their software that plugs into server virtualization hypervisors such as VMware and Microsoft Hyper-V. Then there are all of those who either did not or still don't know of Virsto or their solutions, yet need to learn about them.
Unlike virtual storage arrays (VSAs), virtual storage appliances, or storage virtualization software that aggregates storage, the Virsto software addresses the IO performance aggravation caused by aggregation.
Keep in mind that the best IO is the IO that you do not have to do. The second best IO is the one that has the least impact and that is cost effective. A common approach, or best practice preached by some vendors, for server virtualization and virtual desktop infrastructure (VDI) IO bottlenecks is to throw more SSD or HDD hardware at the problem.
Turns out that the problem with virtual machines (VMs) is not just aggregation (consolidation) causing aggravation, it’s also the mess of mixed applications and IO profiles. That is where IO optimization and acceleration tools come into play that are plugged into applications, file systems, operating systems, hypervisor’s or storage appliances.
In the case of Virsto (read more about their solution here), their technology plugs into the hypervisor (e.g. VMware vSphere/ESX or Hyper-V) to group and optimize IO operations.
By using SSD as a persistent cache, tools such as Virsto can help make better use of underlying storage systems including HDD and SSD, while also removing the aggravation as a result of aggregation.
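To make the caching idea concrete, below is a minimal Python sketch of the general concept behind a fast-tier read cache with least recently used (LRU) eviction. This is illustrative only and is not Virsto's actual design; the block-level interface and the backing_read function are assumptions for the example.

```python
from collections import OrderedDict

class ReadCache:
    """Toy block-level read cache with least recently used (LRU) eviction."""

    def __init__(self, backing_read, capacity=1024):
        self.backing_read = backing_read   # assumed: block number -> bytes
        self.capacity = capacity           # max blocks kept in the fast tier
        self.cache = OrderedDict()         # block number -> bytes, LRU ordered

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)  # hit: refresh LRU position
            return self.cache[block]
        data = self.backing_read(block)    # miss: one IO to the slow tier
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict least recently used block
        return data
```

Repeat reads of hot blocks then cost no backing IO at all, which is the removing-the-aggravation-of-aggregation effect described above; a persistent SSD cache adds the wrinkle that cached contents survive restarts.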
What will be interesting to watch is whether VMware continues to support other hypervisors such as Microsoft Hyper-V or closes the technology to VMware only.
It will also be interesting to see how VMware and their parent EMC can leverage Virsto technology to complement virtual SANs as well as VSAs and underlying hardware, from VFcache to storage arrays with SSD and SSD appliances, as opposed to competing with them.
With the Virsto technology now part of VMware, hopefully there will be less time spent talking about storage hypervisors and more about server IO optimization and enablement, creating broader awareness for the technology.
Congratulations to VMware (and EMC) along with Virsto.
Ok, nuff said.
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Industry trends and perspectives: Chatting with Karl Chen at SNW 2012
This is the second (here is the first, SNW 2012 Waynes World) in a series of StorageIO industry trends and perspectives audio blog posts and pod casts about Storage Networking World (SNW) Fall 2012 in Santa Clara California.
Given how conference conversations tend to occur in the hallways, lobbies and bar areas of venues, what better place to have candid conversations with people from throughout the industry, some you know, some you will get to know better.
In this episode, I’m joined by my co-host Bruce Rave aka Bruce Ravid of Ravid & Associates as we catch up and visit with Chief Marketing Officer (CMO) of Starboard Storage Systems Karl Chen in the Santa Clara Hyatt (event venue) lobby bar area.
Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Karl and Bruce. Our conversation covers SNW, VMworld, Americas Cup Yacht racing, storage technology and networking with people during these events.
Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts and pod casts from SNW and other upcoming events.
Enjoy catching up with Karl Chen via the Fall SNW 2012 pod cast.
Ok, nuff said.
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved
Trick or treat and vendor fun games
In the spirit of Halloween and zombies season, a couple of thoughts come to mind about vendor tricks and treats. This is an industry trends and perspectives post, part of an ongoing series looking at various technology and fun topics.
The first trick or treat game pertains to the blame game; you know either when something breaks, or at the other extreme, before you have even made a decision to buy something. The trick or treat game for decision-making goes something like this.
Vendor “A” says products succeed with their solution while failure results with a solution from “B” when doing “X”. Otoh, vendor “B” claims that “X” will fail when using a solution from vendor “A”. In fact, you can pick what you want to substitute for “X”, perhaps VDI, PCIe, Big Data, Little Data, Backup, Archive, Analytics, Private Cloud, Public Cloud, Hybrid Cloud, eDiscovery you name it.
This is not complicated math or a big data problem requiring a high-performance computing (HPC) platform. There is no need for HPC Zetta-Flop processing using 512 bit addressing of 9.9 (e.g. one nine) PetaBytes of battery-backed DRAM and an IO capability of 9.99999 (e.g. five 9s) trillion 8 bit IOPS to do table pivots or Runge-Kutta numerical analysis, map reduce, SAS or other modeling with an optional iProduct or Android interface.
StorageIO images of touring Texas Advanced Computing (e.g. HPC) Center
Can you solve this equation? Hint: it does not need a PhD or any other advanced degree. Another hint: if you have ever sat on any side of the technology product and services decision-making table, regardless of the costume you wore, you should know the answer.
Of course the question is: would "X" fail regardless of who or what "A" or "B", let alone a "C", "D" or "F", provides? In other words, it is not the solution, technology, vendor or provider, rather the problem, or perhaps even the lack thereof, that is the issue. Or is it a case where a solution from "A", "B" or any other is looking for a problem, and if it is the wrong problem, there can be a wrong solution and thus failure?
Another trick or treat game is when vendor public relations (PR) or analyst relations (AR) people ask for one thing and then deliver or ask for another. For example, some vendor or service provider, their marketing, AR and PR people, or surrogates make contact wanting to tell of various success and failure stories. Of course, this is usually their success and somebody else's failure, or their victory over something or someone, which sometimes can be interesting. Of course, there are also the treats to get you to listen to the above, such as tempting you with a project if you meet with their subject, which may be the trick of a disappearing treat (e.g. magic, poof, it is gone after the discussion).
There is another AR and PR trick and treat where they offer, on behalf of the organization or client they represent, a perspective or exclusive insight on their competitor. Of course, the treat from their perspective is that they will generously expose all that is wrong with what a competitor is saying about its own (e.g. the competitor's) product.
Let me get this straight: I am not supposed to believe what somebody says about his or her own product; however, I am supposed to believe what a competitor says is wrong with the competition's product, and what is right with his or her own product.
Hmm, ok, so let me get this straight: a competitor, say "A", wants to tell me that what somebody from "B" has told me is wrong, and I should schedule a visit with a truth squad member from "A" to get the record set straight about "B"?
Does that mean then that I go to “B” for a rebuttal, as well as an update about “A” from “B”, assuming that what “A” has told me is also false about themselves, and perhaps about “B” or any other?
To be fair, depending on your level of trust and confidence in a vendor, their personnel or their surrogates, you might tend to believe more from them vs. others, at least until you have been tricked after being given treats. There may be some that have been tricked, or that have applied too many treats to present a story where what is behind the costume might be a bit scary.
Having been through enough of these, I candidly believe that sometimes "A" or "B" or any other party actually does believe that they have more or better info about their competitor, and that they can convince somebody about what their competitor is doing better than the competitor can. I also believe that there are people out there who will go to "A" or "B" and believe what they are told based on their preference, bias or interests.
When I hear from vendors, VARs, solution or service providers and others, it is interesting hearing point, counterpoint and so forth; however, if time is limited, I am more interested in hearing from "A" about themselves: what they are doing, where they are having success, where they face challenges, where they are going and, if applicable, going into more detail under NDA.
Customer success stories are good; however, if you are interested in what works, what kind of works, or what does not work, chances are when looking for G2 vs. GQ, a non-scripted customer conversation or perspective on the good, the bad and the ugly is preferred, even if under NDA. Again, if time is limited, which it usually is, focus on what is being done with your solution and where it is going, and if compelled, send follow-up material, which can of course include MUD and FUD about others if that is your preference.
Then there is the briefing where, 21 minutes into a 30 minute call, the vendor or solution provider is still talking about trends, customer pain points and what competitors are doing, with no sign of an announcement, update or news in sight.
Let's not forget about the trick where the vendor marketing or PR person reaches out and says that the CEO, CMO, CTO or some other CxO or Chief Jailable Officer (CJO) wants to talk with you. Part of the trick is when the CxO actually makes it to the briefing and is not ready, does not know why the call is occurring, or thinks that a request for an audience has been made with them for an interview or something else.
A treat is when, 3 to 4 minutes into a briefing, the vendor or solution provider has already framed up what they are doing and why. This means getting to what they are announcing or planning, getting into a conversation to discuss it, and making good follow-up content and resources available.
Sometimes a treat is when a briefer goes on autopilot, nailing their script for 29 minutes of a 30 minute session, then using the last minute to ask if there are any questions. The reason autopilot briefings can be a treat is that when they are simply going over what is in the slide deck, webex, or press release, they afford an opportunity to get caught up on other things while they talk at you. Hmm, perhaps I need to consider playing some tricks in reward for those kinds of treats? ;)
Do not be scared, not everybody is out to trick you with treats, and not all treats have tricks attached to them. Be prepared, figure out who is playing tricks with treats, and who has treats without tricks.
Oh, and as a former IT customer, vendor and analyst, one of my favorites is giving the contact information of my dogs to vendors who require registration on their websites for basic things such as data sheets. Another is supplying the contact information of competing vendors' sales reps to vendors who require registration for basic data sheets or what should otherwise be generally available information, as opposed to more premium treats. Of course there are many more fun tricks; however, let's leave those alone for now.
Note: Zombie voting rules apply, which means vote early, vote often, and of course vote for those who cannot, including those that are dead (real or virtual).
Where To Learn More
View additional related material via the following links.
- Can we get a side of context with them IOPS and other storage metrics?
- Revisiting RAID storage remains relevant and resources
- NVMe overview and primer – Part I
- Part 1 of HDD for content servers series Trends and Content Application Servers
- As the platters spin, HDD’s for cloud, virtual and traditional storage environments
- Hard Disk Drives (HDD) for Virtual Environments
- Server and Storage I/O performance and benchmarking tools
- Server storage I/O performance benchmark workload scripts Part I and Part II
- How to test your HDD, SSD or all flash array (AFA) storage fundamentals
- What is the best server storage I/O workload benchmark? It depends
- I/O, I/O how well do you know about good or bad server and storage I/Os?
- Big Files Lots of Little File Processing Benchmarking with Vdbench
- Part II – NVMe overview and primer (Different Configurations)
- PCIe Server I/O Fundamentals
- If NVMe is the answer, what are the questions?
- NVMe Won't Replace Flash By Itself
- Via Computerweekly – NVMe discussion: PCIe card vs U.2 and M.2
- Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage
- Data Infrastructure Overview, Its Whats Inside of Data Centers
- Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
- The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
- The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
- Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
- www.objectstoragecenter.com and Software Defined, Cloud, Bulk and Object Storage Fundamentals
Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.
What This All Means
Watch out for tricks and treats, and have a safe and fun Zombie (aka Halloween) season. See you while out and about this fall, and don't forget to take part in the ongoing zombie technology poll. Oh, and be safe with trick or treat and vendor fun games.
Ok, nuff said, for now.
Gs
Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
Cloud, virtualization, storage and networking in an election year
My how time flies, seems like just yesterday (back in 2008) that I did a piece titled Politics and Storage, or, storage in an election year V2.008 and if you are not aware, it is 2012 and thus an election year in the U.S. as well as in many other parts of the world. Being an election year it’s not just about politicians, their supporters, pundits, surrogates, donors and voters, it’s also a technology decision-making and acquisition year (as are most years) for many environments.
Similar to politics, some technology decisions will be major while others will be minor or renewals so to speak. Major decisions will revolve around strategies, architectures, visions, implementation plans and technology selections including products, protocols, processes, people, vendors or suppliers and services for traditional, virtual and cloud data infrastructure environments.
Vendors, suppliers, service providers and their associated industry forums or alliances and trade groups are in various sales and marketing awareness campaigns. These campaigns will help decide who gets chosen by customers or prospects for technology acquisitions ranging from hardware, software and services, including servers, storage, IO and networking, desktops, power, cooling, facilities, management tools, virtualization and cloud products and services, along with related items.
The politics of data infrastructures, including servers, storage, networking, hardware, software and services spanning physical, cloud and virtual environments, have similarities to other political races. These include, in many organizations, inter-departmental rivalry over budgets or funding, service levels, decision-making, turf wars and technology ownership, not to mention the usual vendor vs. vendor, VAR vs. VAR, service provider vs. service provider or other match ups.
On the other hand, data and storage are also being used to support political campaigns in many ways across physical, virtual and cloud deployment scenarios.
Let us not forget about the conventions or what are more commonly known as shows, conferences, user group events in the IT world. For example EMCworld earlier this year, Dell Storage Forum, or the recent VMworld (or click here to view video from past VMworld party with INXS), Oracle Open World along with many vendor analyst, partner, press and media or blogger days.
Here are some 2012 politics of data infrastructure and storage campaign match-ups:
- Vendor lock in, is it a problem and who is responsible
- Replication and snapshots vs. Backup vs. data protection modernization
- Erasure codes vs. RAID
- Public vs. Private and hybrid clouds
- Cloud products vs. cloud APIs vs. cloud services
- IBM and the Better Business Bureau vs. Oracle marketing claims
- Cloud confidence vs. cloud data loss vs. loss of access
- Taking shared responsibility for data protection vs. blaming others
- Bring your own device (BYOD) vs. IT supplied
- VDI vs. Physical and traditional desktops including windows performance
- EMC vs NetApp in the race for unified or anything else storage related
- Big iron vs. little iron vs. virtual iron or software defined
- EMC vs. Oracle in the race for big data buzz
- Environmental focused vs. economic and productivity enabling Green IT
- Green IT myths and missed opportunities
- Oracle vs. IBM in the race for big data and little data (databases)
- Clusters, clouds and grids vs. traditional architectures
- Seagate vs. Western Digital (WD) in the race for Hard Disk Drives (HDD)
- Hard vs. soft products and services
- SSD vs. HDD vs. HHDD and SSD startups vs. established vendors
- EMC and Lenovo vs. Dell, HP, IBM, NetApp and others
- Industry adoption vs. industry deployment
- PCIe SSD vendors vs. storage array and appliance vendors
- Nand flash vs. any new SSD entrants for persistent memory
- Consolidate everything vs. virtualize many things
- SAN, NAS or Unified vs. Cloud object vs. DAS vs. SAS vs. FCoE
- Microsoft Hyper-V and Citrix Xen and KVM vs. VMware vSphere
- Microsoft, HP and others vs. Amazon and Google for cloud supremacy
- Edgy vs. civility, G2 vs. GQ, entertainment vs. education
- Fear and FUD vs. credibility and confidence
- Samsung vs. Apple lawsuit(s) part deux
- IOV, SDN, and software defined anything vs. hardware defined anything
Speaking of networks vs. server and storage or software and convergence, how about Brocade vs. Cisco, Qlogic vs. Emulex, Broadcom vs. Mellanox, Juniper vs. HP and Dell (Force10) or Arista vs. others in the race for SAN LAN MAN WAN POTS and PANs.
Then there are the claims, counter claims, pundits, media, bloggers, trade groups or lobbyist, marketing alliance or pacs, paid for ads and posts, tweets and videos along with supporting metrics for traditional and social media.
Let's also not forget about polls, and more polls.
Certainly, there are vendors vs. vendors relying on their campaign teams (sales, marketing, engineering, financing and external surrogates) similar to what you would find with a politician, of course scope, size and complexity would vary.
Surrogates include analysts, bloggers, consultants, business partners, community organizers, editors, VARs, influencers, press, public relations and publications among others. Some claim to be objective and free of vendor influence while leveraging simple to complex schemes for remuneration (e.g. getting paid), while others simply state what they are doing and with whom.
Likewise, some point fingers at others who are misbehaving while deflecting away from what they are actually doing. Hmm, sounds like the pundit or surrogate two-step (as opposed to the Potomac two step) and prompts the question of who is checking the fact checkers and making disclosures (disclosure: this piece is being sponsored by StorageIO ;) )?
What this all means
Use your brain, your eyes and ears, and your nose, all of which have dual paths to your senses.
In other words, if something sounds or looks too good to be true, it probably isn’t.
Likewise if something smells funny or does not feel right to your senses or common sense, it probably is not or at least requires a closer look or analysis.
Be an informed decision maker balancing needs vs. wants to make effective selections regardless of if for a major or minor item, technology, trend, product, process, protocol or service. Informed decisions also mean looking at both current and evolving or future trends, challenges and needs which for data infrastructures including servers, storage, networking, IO fabrics, cloud and virtualization means factoring in changing data and information life cycles and access or usage patterns. After all, while there are tough economic times on a global basis, there is no such thing as a data or information recession.
This also means gaining insight and awareness of issues and challenges, plus balancing awareness and knowledge (G2) vs. looks, appearances and campaign sales pitches (GQ) for your particular environment, priorities and preferences.
Keep in mind and in the spirit of legendary Chicago style voting, when it comes to storage and data infrastructure topics, technologies and decisions, spend early, spend often and spend for those who cannot to keep the vendors and their ecosystem of partners happy.
Note that this post is neither supported, influenced, endorsed nor paid for by any vendors, VARs, service providers, trade groups, political action committees or Picture Archive Communication systems (e.g. PACs), both of which deal with and in big data, along with industry consortiums, their partners, customers or surrogates, and they probably would not approve of it anyways.
With that being said, I am Greg Schulz of StorageIO and am not running for or from anything this year and I do endorse the above post ;).
Ok, nuff said for now
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved
What does new EMC and Lenovo partnership mean?
The past several weeks have been busy with various merger, acquisition and collaboration activity in the IT and data storage world. Summer time often brings new relationships and even summer marriages. The most recent is EMC and Lenovo announcing a new partnership that includes OEM sourcing of technology, market expansion and other initiatives. Hmm, does anybody remember who EMC's former desktop and server partner was, or who put Lenovo out for adoption several years ago?
Here is the press release from EMC and Lenovo that you can read yourself vs. me simply paraphrasing it:
Lenovo and EMC Team Up In Strategic Worldwide Partnership
A Solid Step in Lenovo’s Aspiration to Be a Player in Industry Standard Servers and Networked Storage with EMC’s Leading Technology; EMC Further Strengthens Ability to Serve Customers’ Storage Solutions Needs in China and Other Emerging Markets; Companies Agree to Form SMB-Focused Storage Joint Venture
BEIJING, China – August 1, 2012
Lenovo (HKSE: 992) (ADR: LNVGY) and EMC Corporation (NYSE: EMC) today announced a broad partnership that enhances Lenovo’s position in industry standard servers and networked storage solutions, while significantly expanding EMC’s reach in China and other key, high-growth markets. The new partnership is expected to spark innovation and additional R&D in the server and storage markets by maximizing the product development talents and resources at both companies, while driving scale and efficiency in the partners’ respective supply chains.
The partnership is a strong strategic fit, leveraging the two leading companies’ respective strengths, across three main areas:
- First, Lenovo and EMC have formed a server technology development program that will accelerate and extend Lenovo’s capabilities in the x86 industry-standard server segment. These servers will be brought to market by Lenovo and embedded into selected EMC storage systems over time.
- Second, the companies have forged an OEM and reseller relationship in which Lenovo will provide EMC’s industry-leading networked storage solutions to its customers, initially in China and expanding into other global markets in step with the ongoing development of its server business.
- Finally, EMC and Lenovo plan to bring certain assets and resources from EMC’s Iomega business into a new joint venture which will provide Network Attached Storage (NAS) systems to small/medium businesses (SMB) and distributed enterprise sites.
“Today’s announcement with industry leader EMC is another solid step in our journey to build on our foundation in PCs and become a leader in the new PC-plus era,” said Yuanqing Yang, Lenovo chairman and CEO. “This partnership will help us fully deliver on our PC-plus strategy by giving us strong back-end capabilities and business foundation in servers and storage, in addition to our already strong position in devices. EMC is the perfect partner to help us fully realize the PC-plus opportunity in the long term.”
Joe Tucci, chairman and CEO of EMC, said, “The relationship with Lenovo represents a powerful opportunity for EMC to significantly expand our presence in China, a vibrant and very important market, and extend it to other parts of the world over time. Lenovo has clearly demonstrated its ability to apply its considerable resources and expertise not only to enter, but to lead major market segments. We’re excited to partner with Lenovo as we focus our combined energies serving a broader range of customers with industry-leading storage and server solutions.”
In the joint venture, Lenovo will contribute cash, while EMC will contribute certain assets and resources of Iomega. Upon closing, Lenovo will hold a majority interest in the new joint venture. During and after the transition from independent operations to the joint venture, customers will experience continuity of service, product delivery and warranty fulfillment. The joint venture is subject to customary closing procedures including regulatory approvals and is expected to close by the end of 2012.
The partnership described here is not considered material to either company’s current fiscal year earnings.
About Lenovo
Lenovo (HKSE: 992) (ADR: LNVGY) is a $US30 billion personal technology company and the world’s second largest PC company, serving customers in more than 160 countries. Dedicated to building exceptionally engineered PCs and mobile internet devices, Lenovo’s business is built on product innovation, a highly efficient global supply chain and strong strategic execution. Formed by Lenovo Group’s acquisition of the former IBM Personal Computing Division, the Company develops, manufactures and markets reliable, high-quality, secure and easy-to-use technology products and services. Its product lines include legendary Think-branded commercial PCs and Idea-branded consumer PCs, as well as servers, workstations, and a family of mobile internet devices, including tablets and smart phones. Lenovo has major research centers in Yamato, Japan; Beijing, Shanghai and Shenzhen, China; and Raleigh, North Carolina. For more information, see www.lenovo.com.
About EMC
EMC Corporation is a global leader in enabling businesses and service providers to transform their operations and deliver IT as a service. Fundamental to this transformation is cloud computing. Through innovative products and services, EMC accelerates the journey to cloud computing, helping IT departments to store, manage, protect and analyze their most valuable asset — information — in a more agile, trusted and cost-efficient way. Additional information about EMC can be found at www.EMC.com.
What is my take?
Disclosures
I have been buying and using Lenovo desktop and laptop products for over a decade and currently typing this post from my X1 ThinkPad equipped with a Samsung SSD. Likewise I bought an Iomega IX4 NAS a couple of years ago (so I am a customer), am a Retrospect customer (EMC bought and then sold them off), used to be a Mozy user (now a former customer) and EMC has been a client of StorageIO in the past.
Some of my Lenovo(s) and EMC Iomega IX4
Let us take a step back for a moment. Lenovo was the spinout and sale from IBM, and has a US base in Raleigh, North Carolina. While IBM still partners with Lenovo for desktops, IBM over the past decade or so has been more strategically focused on big enterprise environments, software and services. Note that IBM has continued enhancing its own Intel based servers (e.g. xSeries), proprietary Power processor series, storage and technology solutions (here, here, here and here among others). However, for the most part, IBM has moved away from catering to the consumer, SOHO and SMB server, storage, desktop and related technology environments.
EMC on the other hand started out in the data center, growing up to challenge IBM's dominance of data storage in big environments, to now being the industry's market-making storage player for big and little data, from enterprise to cloud to desktop to server, consumer to data center. EMC was also partnered with Dell, which competes directly with Lenovo, until that relationship ended a few years ago. EMC for its part has been on a growth and expansion strategy, adding technologies, companies, DNA and abilities along with staff in the desktop, server and other spaces from a data, information and storage perspective, not to mention VMware (virtualization and cloud) and RSA (security), among others such as Mozy for cloud backup. EMC is also using more servers in its solutions, ranging from Iomega based NAS to VNX unified storage systems, Greenplum big data to Centera archiving, ATMOS and various data protection solutions among other products.
Note that this is an industry wide trend of leveraging Intel Architecture (IA) along with AMD, Broadcom, and IBM Power among other general-purpose processors and servers as platforms for running storage and data applications or appliances.
Overall, I think this is a good move for both EMC and Lenovo, expanding their reach into adjacent markets while leveraging and complementing each other's strengths.
Ok, let's see who is involved in the next IT summer relationship; nuff said for now.
Cheers Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved
Modernizing data protection with certainty
Speaking of and about modernizing data protection, back in June I was invited to be a keynote presenter on industry trends and perspectives at a series of five dinner events (Boston, Chicago, Palo Alto, Houston and New York City) sponsored by Quantum (that is a disclosure btw).
backup, restore, BC, DR and archiving
The theme of the dinner events was an engaging discussion around modernizing data protection with certainty, along with clouds, virtualization and related topics. Quantum and one of their business partner resellers started each event with introductions, followed by an interactive discussion led by myself, followed by David Chapa (@davidchapa), who tied the various themes to what Quantum is doing along with some of their customer success stories.
Themes and examples for these events build on my book Cloud and Virtual Data Storage Networking including:
- Rethinking how, when, where and why data is being protected
- Big data, little data and big backup issues and techniques
- Archive, backup modernization, compression, dedupe and storage tiering
- Service level agreements (SLA) and service level objectives (SLO)
- Recovery time objective (RTO) and recovery point objective (RPO) (see the sketch following this list)
- Service alignment and balancing needs vs. wants, cost vs. risk
- Protecting virtual, cloud and physical environments
- Stretching your available budget to do more without compromise
- People, processes, products and procedures
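As a simple worked example of the RTO and RPO theme above, here is a hedged Python sketch that checks a backup schedule against a recovery point objective: the worst-case data loss window is the longest gap between successive protection points. The schedule shown is made up for illustration.

```python
from datetime import datetime, timedelta

def worst_case_rpo(backup_times):
    """Return the largest gap between consecutive protection points."""
    times = sorted(backup_times)
    gaps = [later - earlier for earlier, later in zip(times, times[1:])]
    return max(gaps) if gaps else timedelta(0)

# Hypothetical schedule: backups at 00:00, 06:00, 12:00 and 20:00.
backups = [datetime(2012, 7, 1, h) for h in (0, 6, 12, 20)]
print(worst_case_rpo(backups))                        # 8:00:00 worst case
print(worst_case_rpo(backups) <= timedelta(hours=8))  # meets an 8 hour RPO
```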
Quantum is among the industry leaders with multiple technology and solution offerings addressing different aspects of data footprint reduction and data protection modernization. These span physical, virtual and cloud environments, along with traditional tape, disk based, compression, dedupe, archive, big data, hardware, software and management tools. A diverse group of attendees has been at the different events, including enterprise and SMB, public, private and government across different sectors.
Following are links to some blog posts that covered the first series of events, along with some of the specific themes and discussion points from different cities:
Via ITKE: The New Realities of Data Protection
Via ITKE: Looking For Certainty In The Cloud
Via ITKE: Success Stories in Data Protection: Cloud virtualization
Via ITKE: Practical Solutions for Data Protection Challenges
Via David Chapa's blog
If you missed attending any of the above events, more dates are being added in August and September including stops in Cleveland, Raleigh, Atlanta, Washington DC, San Diego, Connecticut and Philadelphia with more details here.
Ok, nuff said for now, hope to see you at one of the upcoming events.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved
What is the best kind of IO? The one you do not have to do
Updated 2/10/2018
What is the best kind of IO? If no IO (input/output) operation is the best IO, then the second best IO is the one that can be done as close as possible to the application and processor, with the best locality of reference. The third best IO is the one that can be done in less time, or at the least cost or impact to the requesting application, which means moving further down the memory and storage stack (figure 1).
Figure 1 memory and storage hierarchy
The problem with IOs is that they are the basic operations for getting data into and out of a computer or processor, so they are required; however, they also have an impact on performance, response or wait time (latency). IOs require CPU or processor time and memory to set up and then process the results, as well as IO and networking resources to move data to its destination or retrieve it from where it is stored. While IOs cannot be eliminated, their impact can be greatly reduced or optimized by doing fewer of them via caching and grouped reads or writes (pre-fetch, write-behind), among other techniques and technologies.
Think of it this way: instead of going on multiple errands, sometimes you can group multiple destinations together, making for a shorter, more efficient trip; however, that grouped trip may also take longer. Hence sometimes it makes sense to go on a couple of quick, short, low latency trips vs. one single larger one that takes half a day yet accomplishes many things. Of course, how far you have to go on those trips (e.g. locality) makes a difference in how many you can do in a given amount of time.
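As a minimal sketch of the grouping idea (write-behind), the following Python buffers small writes and flushes them as one larger, offset-sorted batch, trading a little latency for far fewer IOs. The device object and its write_many method are assumptions for illustration, not a real driver interface.

```python
class WriteBehindBuffer:
    """Toy write grouping: fewer, larger IOs instead of many small ones."""

    def __init__(self, device, max_pending=64):
        self.device = device       # assumed to expose write_many([(off, data)])
        self.max_pending = max_pending
        self.pending = []          # buffered (offset, data) writes

    def write(self, offset, data):
        self.pending.append((offset, data))
        if len(self.pending) >= self.max_pending:
            self.flush()           # one grouped trip instead of 64 errands

    def flush(self):
        if self.pending:
            # Sort by offset for locality; the stable sort preserves the order
            # of writes to the same offset. A real implementation also has to
            # handle overlapping ranges and crash consistency.
            self.device.write_many(sorted(self.pending, key=lambda w: w[0]))
            self.pending = []
```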
What is locality of reference?
Locality of reference refers to how close (e.g. location) data is to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, then level 1 (L1), level 2 (L2) or level 3 (L3) onboard cache, followed by dynamic random access memory (DRAM). After that comes memory, also known as storage, on PCIe cards such as NAND flash solid state devices (SSD), or storage accessible via an adapter on direct attached storage (DAS), a SAN or a NAS device. In the case of a PCIe NAND flash SSD card, even though physically the NAND flash is closer to the processor, there is still the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, they help with cache hits, cache use and cache effectiveness vs. simply boosting cache utilization.
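Locality is visible even within DRAM. Below is a small, hedged Python experiment comparing row-major traversal of a 2D array (sequential addresses, cache friendly) with column-major traversal (strided addresses, cache unfriendly). Exact numbers vary by system, and interpreter overhead plus Python's pointer indirection dilute the pure cache effect compared to C; however, the trend typically holds.

```python
import time

N = 2000
matrix = [[1] * N for _ in range(N)]   # N x N grid, each row contiguous

start = time.perf_counter()
total = 0
for row in matrix:                     # row-major: sequential access
    for value in row:
        total += value
t_row = time.perf_counter() - start

start = time.perf_counter()
total = 0
for c in range(N):                     # column-major: strided access
    for r in range(N):
        total += matrix[r][c]
t_col = time.perf_counter() - start

print(f"row-major {t_row:.3f}s vs column-major {t_col:.3f}s")
```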
Where To Learn More
View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.
- Can we get a side of context with them IOPS and other storage metrics?
- WHEN AND WHERE TO USE NAND FLASH SSD FOR VIRTUAL SERVERS
- Revisiting RAID storage remains relevant and resources
- NVMe overview and primer – Part I
- Part 1 of HDD for content servers series Trends and Content Application Servers
- Part 2 of HDD for content servers series Content application server decisions and testing plans
- Part 3 of HDD for content servers series Test hardware and software configuration
- Part 4 of HDD for content servers series Large file I/O processing
- Part 5 of HDD for content servers series Small file I/O processing
- Part 6 of HDD for content servers series General I/O processing
- Part 7 of HDD for content servers series How HDD continue to evolve over different generations and wrap up
- As the platters spin, HDD’s for cloud, virtual and traditional storage environments
- How many IOPS can a HDD, HHDD or SSD do?
- Hard Disk Drives (HDD) for Virtual Environments
- Server and Storage I/O performance and benchmarking tools
- Server storage I/O performance benchmark workload scripts Part I and Part II
- How to test your HDD, SSD or all flash array (AFA) storage fundamentals
- What is the best server storage I/O workload benchmark? It depends
- I/O, I/O how well do you know about good or bad server and storage I/Os?
- Big Files Lots of Little File Processing Benchmarking with Vdbench
- Part II – NVMe overview and primer (Different Configurations)
- Part III – NVMe overview and primer (Need for Performance Speed)
- Part IV – NVMe overview and primer (Where and How to use NVMe)
- Part V – NVMe overview and primer (Where to learn more, what this all means)
- PCIe Server I/O Fundamentals
- If NVMe is the answer, what are the questions?
- NVMe Won't Replace Flash By Itself
- Via Computerweekly – NVMe discussion: PCIe card vs U.2 and M.2
- Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage
- Part II – Intel and Micron new 3D XPoint server and storage NVM
- Part III – 3D XPoint new server storage memory from Intel and Micron
- Server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
- Data Infrastructure Overview, Its Whats Inside of Data Centers
- All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
- Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
- The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
- The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
- Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
- Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
- Various Data Infrastructure related events, webinars and other activities
- www.objectstoragecenter.com and Software Defined, Cloud, Bulk and Object Storage Fundamentals
- Server Storage I/O Network PCIe Fundamentals
Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.
What This All Means
What can you do to cut the impact of IO?
- Establish baseline performance and availability metrics for comparison (see the sketch after this list)
- Realize that IOs are a fact of IT virtual, physical and cloud life
- Understand what is a bad IO along with its impact
- Identify why an IO is bad, expensive or causing an impact
- Find and fix the problem, either with software, application or database changes
- Throw more software caching tools, hypervisors or hardware at the problem
- Hardware includes faster processors with more DRAM and fast internal buses
- Leverage local PCIe flash SSD cards for caching or as targets
- Utilize storage systems or appliances that have intelligent caching and storage optimization capabilities (performance, availability, capacity).
- Compare changes and improvements to baseline, quantify improvement
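As a minimal sketch of the first item above (establishing a baseline), the following Python times a batch of small random reads against a file and reports average and 99th percentile latency along with single-threaded IOPS. The file path and sizes are illustrative assumptions; reads may be served from the OS page cache, os.pread is POSIX only, and for serious work use a purpose-built tool such as vdbench or fio.

```python
import os
import random
import time

PATH = "testfile.bin"    # hypothetical pre-created test file
IO_SIZE = 4096           # 4 KB random reads
COUNT = 1000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
latencies = []
for _ in range(COUNT):
    offset = random.randrange(0, max(size - IO_SIZE, 1))
    start = time.perf_counter()
    os.pread(fd, IO_SIZE, offset)       # one small random read IO
    latencies.append(time.perf_counter() - start)
os.close(fd)

latencies.sort()
avg = sum(latencies) / len(latencies)
p99 = latencies[int(0.99 * (COUNT - 1))]
print(f"avg {avg * 1e6:.0f} us, p99 {p99 * 1e6:.0f} us, "
      f"~{1 / avg:.0f} IOPS single threaded")
```

Rerun the same script after a change (e.g. adding a cache or moving to SSD) and compare against the saved numbers, which is the comparison called for in the last item of the list above.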
Ok, nuff said, for now.
Gs
Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.