Server StorageIO Data Infrastructure Insights and Analysis
Category: IT Infrastructure Topics
Items and topics, techniques and technologies pertaining to IT infrastructure and resources, including their management across various technology domains including servers, storage, network and facilities hardware, software and practices.
Some August 2015 Amazon Web Services (AWS) and Microsoft Azure Cloud Updates
Cloud service providers continue to extend their features, functions and capabilities. Being a customer of both Amazon Web Services (AWS) as well as Microsoft Azure (among others), I receive monthly news updates about service improvements along with new features. Here are a couple of examples involving recent updates from AWS and Azure.
Azure enhancements
Azure Premium Storage generally available in Japan East
Solid State Device (SSD) based Azure Premium Storage is now available in the Japan East region. Add up to 32 TB and more than 64,000 IOPS (read operations) per virtual machine with Azure Premium Storage. Learn more about Azure storage and pricing here.
Azure Data Factory generally available
Data Factory is a cloud-based data integration service for automated management as well as movement and transformation of data. Learn more and view pricing options here.
AWS Partner Updates
A recent Amazon Web Services (AWS) customer update included the following pertaining to partner storage solutions.
Learn more about AWS Partner Network (APN) here or click on the above image.
Primary Cloud File and NAS storage complementing on-premises (e.g. your local) storage
Avere
Ctera
NetApp (Cloud OnTap)
Panzura
SoftNAS
Zadara
Secure File Transfer
Aspera
Signiant
Note that the above are the solutions listed on the AWS Storage Partner Page as of this post being published, and are subject to change. Likewise, solutions that are not part of the AWS partner program are not listed.
How do primary storage clouds and cloud for backup differ?
What’s most important to know about my cloud privacy policy?
What this all means and wrap up
Cloud Service Providers (CSP) continue to enhance their capabilities, as well as their footprints as part of growth. In addition to technology, tools and the number of regions, sites and data centers, the CSPs are also expanding their partner networks, both in how many partners they have and in the scope of those partnerships. Some of these partnerships treat the cloud as a destination; others enable hybrid scenarios where public clouds become an extension complementing traditional IT. Everything is not the same in most environments, and one type of cloud approach does not have to suit or fit all needs, hence the value of hybrid cloud deployment and usage.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Supermicro CSE-M14TQC Use your media bay to add 12 Gbps SAS SSD drives to your server
Do you have a computer server, workstation or mini-tower PC that needs more 2.5" form factor hard disk drive (HDD), solid state device (SSD) or hybrid flash drives added, yet has no expansion space?
Do you also want or need the HDD or SSD drive expansion slots to be hot swappable, 6 Gbps SATA3 along with up to 12 Gbps SAS devices?
Do you have an available 5.25" media bay slot (e.g. where you can add an optional CD or DVD drive) or can you remove your existing CD or DVD drive using USB for software loading?
Do you need to carry out the above without swapping out your existing server or workstation, on a reasonable budget, say around $100 USD plus tax, handling and shipping (your prices may vary)?
If you need to implement the above, then here is a possible solution, or in my case, an actual solution.
Supermicro CSE-M14TQC with hot swap canister before installing in one of my servers
In the past I have used a solution from StarTech that supports up to 4 x 2.5" 6 Gbps SAS and SATA drives in a 5.25" media bay form factor, installing these in my various HP, Dell and Lenovo servers to increase internal storage bays (slots).
Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure
I still use the StarTech device shown above (read earlier reviews and experiences here, here and here) in some of my servers, where it continues to be great for 6 Gbps SAS and SATA 2.5" HDDs and SSDs. However, for 12 Gbps SAS devices, I have used other approaches including external 12 Gbps SAS enclosures.
Recently while talking with the folks over at Servers Direct, I mentioned how I was using the StarTech 4 x 2.5" 6 Gbps SAS/SATA media bay enclosure as a means of boosting the number of internal drives that could be put into some smaller servers. The Servers Direct folks told me about the Supermicro CSE-M14TQC which, after doing some research, I decided to buy to complement the StarTech 6 Gbps enclosures, as well as external 12 Gbps SAS enclosures and other internal options.
What is the Supermicro CSE-M14TQC?
The CSE-M14TQC is a 5.25" form factor enclosure that enables four (4) 2.5" hot swappable (if your adapter and OS supports hot swap) 12 Gbps SAS or 6 Gbps SATA devices (HDD and SSD) to fit into the media bay slot normally used by CD/DVD devices in servers or workstations. There is a single Molex male power connector on the rear of the enclosure that can be used to attach to your servers available power using applicable connector adapters. In addition there are four seperate drive connectors (e.g. SATA type connectors) that support up to 12 Gbps SAS per drive which you can attach to your servers motherboard (note SAS devices need a SAS controller), HBA or RAID adapters internal ports.
Cooling is provided via a rear mounted 12,500 RPM, 16 cubic feet per minute fan. Each of the four drives is hot swappable (requires operating system or hypervisor support) and contained in a small canister (provided with the enclosure). Drives easily mount to the canister via screws that are also supplied as part of the enclosure kit. There is also a drive activity and failure notification LED for the devices. If you do not have any available SAS or SATA ports on your server's motherboard, you can use an available PCIe slot and add an HBA or RAID card for attaching the CSE-M14TQC to the drives. For example, a 12 Gbps SAS (6 Gbps SATA) Avago/LSI RAID card, or a 6 Gbps SAS/SATA RAID card.
Via Supermicro CSE-M14TQC rear details (4 x SATA and 1 Molex power connector)
CSE-M14TQC rear view before installation
CSE-M14TQC ready for installation with 4 x SATA (12 Gbps SAS) drive connectors and Molex power connector
Tip: In the case of the Lenovo TS140 into which I initially installed the CSE-M14TQC, there is not a lot of space for installing the drive connectors or Molex power connector to the enclosure. Instead, attach the cables to the CSE-M14TQC as shown above before installing the enclosure into the media bay slot. Simply attach the connectors as shown and feed them through the media bay opening as you install the CSE-M14TQC enclosure. Then attach the drive connectors to your HBA, RAID card or server motherboard, and the power connector to your power source inside the server.
Note and disclaimer: pay attention to your server manufacturer's power loading and specifications, along with how much power will be used by the HDDs or SSDs to be installed, to avoid electrical power or fire issues due to overloading!
CSE-M14TQC installed into Lenovo TS140 empty media bay
CSE-M14TQC installed with front face plate installed on Lenovo TS140
If you have a server that simply needs some extra storage capacity via some 2.5" HDDs, or a performance boost from fast SSDs, yet does not have any more internal drive slots or expansion bays, leverage your media bay. This applies to smaller environments where you might have one or two servers, as well as to environments where you want or need to create a scale-out software defined storage or hyper-converged platform using your own hardware. Another option is that if you have a lab or test environment for VMware vSphere ESXi, Windows, Linux, OpenStack or other things, this can be a cost-effective approach to adding both storage space capacity as well as performance, while leveraging newer 12 Gbps SAS technologies.
For example, create a VMware VSAN cluster using smaller servers such as the Lenovo TS140 or equivalent, where you can install a couple of 6TB or 8TB higher capacity 3.5" drives in the internal drive bays, then add a couple of 12 Gbps SAS SSDs along with a couple of 2.5" 2TB (or larger) HDDs, a RAID card and a high-speed networking card. If VMware VSAN is not your thing, how about setting up a Windows Server 2012 R2 failover cluster including Scale Out File Server (SOFS) with Hyper-V, or perhaps OpenStack or one of many other virtual storage appliances (VSA) or software defined storage, networking or other solutions? Perhaps you need to deploy more storage for a big data Hadoop based analytics system, or a cloud or object storage solution? On the other hand, if you simply need to add some storage to your storage, media, gaming or general purpose server, the CSE-M14TQC can be an option along with other external solutions.
Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350
Do you have a Lenovo TD350, or for that matter many other servers, that when trying to load or run VMware vSphere ESXi 5.5 u2 (or other versions) runs into a boot loop at the "Initializing ACPI" point?
VMware ACPI boot loop
The symptoms are that you see ESXi start its boot process, loading drivers and modules (e.g. black screen), then you see the yellow boot screen with Timer and Scheduler initialized, and at the "Initializing ACPI" point, ka boom, a boot loop starts (e.g. the above process repeats after the system boots).
The fix is actually pretty quick and simple; finding it, however, took a bit of time, trial and error.
There were of course the usual suspects, such as:
Checking the BIOS and firmware version of the motherboard on the Lenovo TD350 (checked this, however did not upgrade)
Making sure that the proper VMware ESXi patches and updates were installed (they were; this was a pre-built image from another working server)
Having the latest installation media if this was a new install (tried this as part of troubleshooting to make sure the pre-built image was ok)
Removing any conflicting devices (small diversion hint: make sure that if you have cloned a working VMware image to an internal drive, it is removed to avoid same file system UUID errors)
Booting into the BIOS to make sure that VT is enabled for the processor, that AHCI (as opposed to IDE or RAID) is enabled for any SATA drives, and that boot is set to Legacy vs. Auto (e.g. disable UEFI support), as well as verifying boot order. Having been in auto mode for UEFI support for some other activity, this was easy to change, however it was not the magic silver bullet I was looking for.
Breaking the VMware ACPI boot loop on Lenovo TD350
After doing some searching and coming up with some interesting and false leads, as well as trying several boots, BIOS configuration changes, and even cloning the good VMware ESXi boot image to an internal drive in case there was a USB boot issue, the solution was rather simple once found (or remembered).
Lenovo TD350 BIOS basic settings
Lenovo TD350 processor settings
Make sure that in your BIOS setup under PCIe you disable "Above 4GB decoding".
Turns out that I had enabled "Above 4GB decoding" for some other things I had done.
Lenovo TD350 disabling above 4GB decoding on PCIE under advanced settings
Once I made the above change and pressed F10 to save the BIOS settings and boot, VMware ESXi had no issues getting past the ACPI initializing step and the boot loop was broken.
Lenovo ThinkServer TD340 Server and StorageIO lab Review
Part II: Lenovo TS140 Server and Storage I/O lab Review
Software defined storage on a budget with Lenovo TS140
What this all means and wrap up
In this day and age of software defined focus, remember to double-check how your hardware BIOS (e.g. software) is defined for supporting various software defined server, storage, I/O and networking software for cloud, virtual, container and legacy environments. Watch for future posts with my experiences using the Lenovo TD350, including with Windows 2012 R2 (bare metal and virtual) and Ubuntu (bare metal and virtual) running various application workloads among other things.
Intel Micron 3D XPoint server storage NVM SCM PM SSD.
This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.
Is this 3D XPoint marketing, manufacturing or material technology?
You can’t have a successful manufactured material technology without some marketing; likewise, marketing without some manufactured material would be manufactured marketing. In the case of 3D XPoint and its announcement launch, there was real technology shown, granted only a wafer and dies as opposed to an actual DDR4 DIMM, PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made-for-TV (or web) circus event, which can be a good thing.
Wafer unveiled containing 3D XPoint 128 Gb dies
Who will get access to 3D XPoint?
Initially, 3D XPoint production capacity will be limited, with the two companies offering early samples to their customers later this year; general production is slated for 2016, meaning the first real customer-deployed products sometime in 2016.
Is it NAND or NOT?
3D XPoint is not NAND flash, it is also not NVRAM or DRAM, it’s a new class of NVM that can be used for server class main memory with persistency, or as persistent data storage among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable with a longer useful life for writing and storing data vs. NAND flash.
Why is 3D XPoint important?
As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947; granted, there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class, joining its predecessors, many of which continue to be used today in various roles.
Major memory classes or categories timeline
In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).
Initially the 3D XPoint technology is available as a two-layer, 128 Gbit (cell) per die capacity. Keep in mind that there are usually 8 bits to a byte, resulting in 16 GByte capacity per chip initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve; likewise most implementations will have multiple chips in some type of configuration.
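To make the die-to-chip arithmetic above concrete, here is a minimal sketch; the 128 Gbit die figure comes from the announcement, while the dies-per-package counts are illustrative assumptions only.

```python
# Die-to-capacity arithmetic for 3D XPoint. The 128 Gbit die density is from
# the announcement; the dies-per-package counts are illustrative assumptions.
GBITS_PER_DIE = 128
gbytes_per_die = GBITS_PER_DIE / 8          # 8 bits per byte -> 16 GBytes raw

for dies in (1, 2, 4, 8):                   # hypothetical multi-die packages
    print(f"{dies} die(s): {int(dies * gbytes_per_die)} GB raw")
# 1 die(s): 16 GB ... 8 die(s): 128 GB raw
```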
What will 3D XPoint cost?
During the 3D XPoint launch webinar, Intel and Micron hinted that first pricing will be between current DRAM and NAND flash on a per cell or bit basis; however, real pricing and costs will vary depending on how it is packaged for use. For example, whether placed on a DDR4 or different type of DIMM, on a PCIe Add In Card (AIC), or as a drive form factor SSD among other options will vary the real price. Likewise, as with other memories and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to further improve.
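As a rough way to picture what "between current DRAM and NAND flash" could mean, here is a small sketch; no actual pricing was announced, so both per-GB figures below are made-up placeholders rather than market data.

```python
# Hypothetical per-GB prices only; Intel and Micron announced no pricing,
# just that 3D XPoint would land somewhere between DRAM and NAND flash.
dram_per_gb = 8.00      # placeholder assumption
nand_per_gb = 0.40      # placeholder assumption

# The hinted range spans over an order of magnitude, so a geometric midpoint
# is one way to visualize a "middle" price point within that band.
midpoint = (dram_per_gb * nand_per_gb) ** 0.5
print(f"hypothetical midpoint: ${midpoint:.2f}/GB")   # ~$1.79/GB
```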
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) (Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron (Media Room, links, videos, images and more including B roll videos)
DRAM, which has been around for some time, has plenty of life left for many applications, as does NAND flash including new 3D NAND, vNAND and other variations. For the next several years, there will be a co-existence between new and old NVM and DRAM among other memory technologies including 3D XPoint. Read more in this series including Part I here and Part III here.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
Part III – 3D XPoint server storage class memory SCM
Updated 1/31/2018
3D XPoint nvm pm scm storage class memory.
This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.
What is 3D XPoint and how does it work?
3D XPoint is a new class or category of memory (view other categories of memory here) that provides performance for reads and writes closer to that of DRAM with about 10x the capacity density. In addition to speed closer to DRAM vs. slower NAND flash, 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher density, fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs) including PCIe Add In Cards (AIC), M.2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices, with better endurance or life span compared to NAND flash.
3D XPoint architecture and attributes
The initial die or basic chip building block of the 3D XPoint implementation is a two-layer 128 Gbit device which, at 8 bits per byte, yields 16 GB raw. Over time, increased densities should become available as the bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die can hold up to 16 GBytes of data, multiple dies could be packaged together to create a 32 GB, 64 GB, 128 GB or larger actual product. Think about not only where packaged flash based SSD capacities are today, but also in terms of where DDR3 and DDR4 DIMMs are at, such as 4 GB, 8 GB, 16 GB and 32 GB densities.
The 3D aspect comes from the memory being in a matrix initially being two layers high, with multiple rows and columns that intersect, where those intersections occur is a microscopic material based switch for accessing a particular memory cell. Unlike NAND flash where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed to speed up reads and writes in a more granular fashion. It is this more granular access along with performance that will enable 3D XPoint to be used in lower latency scenarios where DRAM would normally be used.
Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron based, it is material based. While Intel and Micron did not specify the actual chemistry and physical materials used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out how Dailytech makes an interesting educated speculation or thesis on the underlying technology.
Watch the following video to get a better idea and visually see how 3D XPoint works.
Left: many dies on a wafer; right: a closer look at the dies cut from the wafer
Dies (here and here) are the basic building block of what goes into chips, which in turn are the components used for creating DDR DIMMs for main computer memory, as well as for creating SD and MicroSD cards, USB thumb drives, PCIe AIC and drive form factor SSDs, as well as custom modules on motherboards, or bare die and wafer level consumption (e.g. where you are doing really custom things at volume, beyond soldering iron scale).
Have Intel and Micron cornered the NVM and memory market?
We have heard proclamations, speculation and statements of the demise of DRAM, NAND flash and other volatile and NVM memories for years, if not decades now. Each year there is the usual "this will be the year of x" where "x" can include among others: Resistive RAM aka ReRAM or RRAM aka the memristor (which HP earlier announced they were going to bring to market before canceling those plans earlier this year, while Crossbar continues to pursue RRAM); MRAM or Magnetoresistive RAM; Phase Change Memory aka CRAM or PCM and PRAM; and FRAM aka FeRAM or Ferroelectric RAM among others.
Expanding persistent memory and SSD storage markets
Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can then be produced at production volumes in economic yields. As a reference point, there is still plenty of life in both DRAM as well as NAND flash, the latter having appeared around 1989.
Technology industry adoption precedes customer adoption and deployment
There is a difference between industry adoption and deployment vs. customer adoption and deployment, they are related, yet separated by time as shown in the above figure. What this means is that there can be several years from the time a new technology is initially introduced and when it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads the past few years since it was introduced in 1989.
This begs the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at Dailytech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.
What’s the difference between 3D NAND flash and 3D XPoint?
3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. it's not NAND).
3D NAND is a variation of traditional flash, with the difference being vertical stacking vs. horizontal to improve density, also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment (scaling up), vs. scaling out by using more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier, and more recently magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron (Media Room, links, videos, images and more including B roll videos)
First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and both DRAM and NAND flash will not be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. Same with DRAM, which has been around for some time; it too still has plenty of life left for many applications. However, other applications that need improved speed over NAND flash, or persistency and density vs. DRAM, will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus at least for the next several years there will be a co-existence between new and old NVM and DRAM among other memory technologies. Bottom line: 3D XPoint is a new class of NVM memory that can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
3D XPoint NVM persistent memory PM storage class memory SCM
Updated 1/31/2018
This is the first of a three-part series on the Intel and Micron announcement unveiling new 3D XPoint Non-Volatile Memory (NVM) for servers and storage. Read Part II here and Part III here.
In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used for primary main memory (e.g. what's in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), as well as for persistent storage faster than today's NAND flash-based solid state devices (SSD), not to mention future hybrid usage scenarios. Note that this announcement, while having the common term 3D in it, is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).
Data needs to be close to processing, processing needs to be close to the data (locality of reference)
Server Storage I/O memory hardware and software hierarchy along with technology tiers
What did Intel and Micron announce?
Intel SVP and General Manager Non-Volatile Memory solutions group Robert Crooke (Left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology jointly developed and manufactured by Intel and Micron which is a new form or category of NVM that can be used for both primary memory in servers, laptops, other computers among other uses, as well as for persistent data storage.
Robert Crooke (Left) and Mark Durcan (Right)
Summary of 3D XPoint announcement
New category of NVM memory for servers and storage
Joint development and manufacturing by Intel and Micron in Utah
Non volatile so can be used for storage or persistent server main memory
Allows NVM to scale with data, storage and processor performance
Leverages capabilities of both Intel and Micron who have collaborated in the past
Performance: Intel and Micron claim up to 1,000x faster vs. NAND flash
Availability: persistent NVM compared to DRAM, with better durability (life span) vs. NAND flash
Capacity: densities about 10x better vs. traditional DRAM
Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products)
What applications and products is 3D XPoint suited for?
In general, 3D XPoint should be able to be used for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT and cloud or managed service provider data center applications and services to consumer focused uses, among many others.
3D XPoint enabling various applications
In general, applications or usage scenarios along with supporting products that can benefit from 3D XPoint include, among others: applications that need larger amounts of main memory in a denser footprint, such as in-memory databases, little and big data analytics, gaming, wave form analysis for security, copyright or other detection analysis, life sciences, high performance compute and high-productivity compute, energy, and video and content serving, among many others.
In addition, applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or un-planned maintenance, or of having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot), will benefit. 3D XPoint will also be useful for applications that need faster read and write performance compared to current generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, much like how today's applications have benefited from DRAM used in Dual Inline Memory Modules (DIMM) and NAND flash advances over the past several decades.
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) (Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron (Media Room, links, videos, images and more including B roll videos)
First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint along with more analysis and commentary.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
Hello and welcome to this July 2015 Server StorageIO update newsletter. It's mid-summer here in the northern hemisphere, which for many means vacations or holidays.
Content Solution Platforms
Thus this month's newsletter has a focus on content solution platforms, including hardware and software that get defined to support various applications. Content solutions span from video (4K, HD and legacy streaming, pre-/post-production and editing), audio, imaging (photo, seismic, energy, healthcare, etc.) to security surveillance (including Intelligent Video Surveillance [IVS] as well as Intelligence Surveillance and Reconnaissance [ISR]).
An industry and customer trend is leveraging converged platforms based on multi-socket processors with dozens of cores and threads (logical processors) to support parallel or high-concurrent threaded content based applications.
Recently I had the opportunity, via Servers Direct, to get some hands-on test time with one of their 2U Content Solution platforms. In addition to big fast data, other content solution applications include: content distribution network (CDN) content caching, network function virtualization (NFV), software-defined network (SDN), cloud rich unstructured big fast media data, analytics and little data (e.g. SQL and NoSQL database, key-value stores, repositories and meta-data) among others.
View other Server StorageIO lab review reports here
Closing Comments
Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in-the-news commentary appearing soon.
Cheers gs
Greg Schulz – @StorageIO
Microsoft MVP File System Storage VMware vExpert
In This Issue
Industry Trends News
Commentary in the news
Tips and Articles
StorageIOblog posts
Server StorageIO Lab reviews
Events and Webinars
Resources and Links
StorageIO Commentary in the news
Recent Server StorageIO commentary and industry trends perspectives about news, activities and announcements.
Processor: A Look At Object-Based Storage
Processor: Newest and best server trends
PowerMore: Flash not just for performance
SearchVirtualStorage: Containers and storage
BizTechMagazine: Simplify with virtualization
EnterpriseStorageForum: Future DR Storage
EnterpriseStorageForum: 10 Tips for DRaaS
EnterpriseStorageForum: NVMe planning
A common question I am asked is, “What is the best storage technology?” My routine answer is, “It depends!” During my recent Interop Las Vegas session “Smart Shopping for Your Storage Strategy” I addressed this very question. Read more in my tip Selecting Storage: Start With Requirements over at Network Computing.
Hello and welcome to this joint May and June 2015 Server StorageIO update newsletter. Here in the northern hemisphere it's summer, which means holiday vacations among other things.
There has been a lot going on this spring and so far this summer with more in the wings. Summer can also be a time to get caught up on some things, preparing for others while hopefully being able to enjoy some time off as well.
In terms of what I have been working on (or with): clouds (OpenStack, vCloud Air, AWS, Azure, GCS among others), virtual and containers, flash SSD devices (drives, cards), software defining, content servers, NVMe, databases, data protection items, servers, cache and micro-tiering among other things.
Speaking of getting caught up, back in early May, among many other conferences (Cisco, Docker, HP, IBM, OpenStack, Red Hat and many other events), was EMCworld. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that's a disclosure btw ;). View a summary StorageIOblog post covering EMCworld 2015 here, along with recent EMC announcements including the acquisition of cloud services vendor Virtustream for $1.2B, and ECS 2.0.
Server and Storage I/O Wrappings
This month's newsletter has a focus on software and storage wrappings, that is, how your storage or software is packaged, delivered or deployed. For example: traditional physical storage systems, software defined storage as shrink-wrap or download, tin-wrapped software as an appliance, virtual wrapped such as a virtual storage appliance, or cloud wrapped, among others.
OpenStack software defined cloud
OpenStack (the organization, community, events and software) continues to gain momentum. The latest release, known as Kilo (more Kilo info here), was released in early April, followed by the OpenStack summit in May.
Some of you might be more involved with OpenStack than others, perhaps having already deployed it into your production environment. Perhaps you, like myself, have OpenStack running in a lab for proof of concept, research, development or learning among other things.
You might even be using the services of a public cloud or managed service provider that is powered by OpenStack. On the other hand, you might be familiar with OpenStack from reading up on it, watching videos, listening to podcasts or attending events to figure out what it is, where it fits, as well as what your organization can use it for.
Drew Robb (@Robbdrew) has a good overview piece about OpenStack and storage over at Enterprise Storage Forum (here). OpenStack is a collection of tools or bundles for building private, hybrid and public clouds. These various open source projects within the OpenStack umbrella include compute (Nova) and virtual machine images (Glance). Other components include dashboard management (Horizon), security and identity control (Keystone), network (Neutron), object storage (Swift), block storage (Cinder) and file-based storage (Manila) among others.
It's up to you to decide which pieces to add. For example, you can use Swift without having virtual machines and vice versa. Read Drew's complete article here.
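For a sense of how one of those pieces looks from a script, here is a minimal sketch using the python-swiftclient library to list object storage (Swift) containers; the auth URL, tenant and credentials are placeholders for your own OpenStack endpoint.

```python
# Minimal sketch: list Swift (object storage) containers in an OpenStack
# cloud. The auth URL, tenant and credentials below are placeholders.
import swiftclient

conn = swiftclient.Connection(authurl="http://keystone.example:5000/v2.0",
                              user="demo", key="secret",
                              tenant_name="demo", auth_version="2")

headers, containers = conn.get_account()      # account-level listing
for c in containers:
    print(c["name"], c["count"], "objects", c["bytes"], "bytes")
```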
This is part of an ongoing series of posts that are part of www.storageioblog.com/data-protection-diaries-main/ on data protection including archiving, backup/restore, business continuance (BC), business resiliency (BR), data footprint reduction (DFR), disaster recovery (DR) and High Availability (HA), along with related themes, tools, technologies, techniques, trends and strategies.
Data protection is a broad topic that spans from logical and physical security to HA, BC, BR, DR and archiving (including life beyond compliance), along with various tools, technologies and techniques. Key is aligning those to the needs of the business or organization for today's as well as tomorrow's requirements. Instead of doing things the way they have been done in the past, which may have been based on what was known or possible due to technology capabilities, why not start using new and old things in new ways?
Let’s start using all the tools in the data protection toolbox regardless of if they are new or old, cloud, virtual, physical, software defined product or service in new ways while keeping the requirements of the business in focus. Read more from this post here.
Recent Server StorageIO commentary and industry trends perspectives about news, activities and announcements.
BizTechMagazine: Comments on how to simplify your data center with virtualization
EnterpriseStorageForum: Comments on OpenStack and Clouds
EnterpriseStorageForum: Comments on Top Ten Software Defined Storage Tips, Gotchas and Cautions
EdTech: Comments on Harness Power with New Processors
Processor: Comments on Protecting Your Servers & Networking equipment
Processor: Comments on Improve Remote Server Management including KVM
CyberTrend: Comments on Software Defined Data Center and Virtualization
BizTechMagazine: Businesses Prepare as End-of-Life for Windows Server 2003 Nears
InformationWeek: Top 10 sessions from Interop Las Vegas 2015
This is a new section, starting in this issue, listing various new or existing vendors as well as service providers you may not have heard about.
CloudHQ – Cloud management tools
EMCcode Rex-Ray – Container management
Enmotus FUZE – Flash leveraged micro tiering
Rubrik – Data protection management
Sureline – Data protection management
Virtunet systems – VMware flash cache software
InfiniteIO – Cloud and NAS cache appliance
Servers Direct – Server and storage platforms
Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here. There are over 1,000 entries (and growing) on the links page.
StorageIO Tips and Articles
So you have a new storage device or system. How will you test or find its performance? Check out this quick-read tip on storage benchmark and testing fundamentals over at BizTech.
BrightTalk Webinar – June 23 2015 9AM PT Server Storage I/O Innovation v2.015: Protect Preserve & Serve Your Information
From StorageIO Labs
Research, Reviews and Reports
VMware vCloud Air Test Drive
VMware vCloud Air provides a platform similar to those just mentioned among others for your applications and their underlying resource needs (compute, memory, storage, networking) to be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises based VMware solutions you might be familiar with.
Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.
EMCworld 2015 How Do You Want Your Storage Wrapped?
Back in early May I was invited by EMC to attend EMCworld 2015, which included both the public sessions as well as several NDA based discussions. Keep in mind that there is the known, there is the unknown (or assumed or speculated), and in between there are NDAs; nuff said on that. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that's a disclosure btw ;) and here is a synopsis of various EMCworld 2015 announcements.
What EMC announced
VMAX3 enhancements to the EMC enterprise flagship storage platform to keep it relevant for traditional legacy workloads as well as for converged, scale-out, cloud, virtual and software defined environments.
VNX 3200 entry-level All Flash Array (AFA) flash SSD system starting at $25,000 USD for a 3TB unified platform with full data services found in other VNX products.
vVNX aka Virtual VNX aka "project liberty", which is a community (e.g. free) software version of the VNX. vVNX is a Virtual Storage Appliance (VSA) that you download and run on a VMware platform. Learn more and download here. Note the install will do a CPU type check, so forget about trying to run it on an Intel NUC or similar; I tried just because I could, and the install will protect you from doing such things.
Various data protection related items, including new Data Domain platforms as well as software updates and integration with other EMC platforms (storage systems).
All Flash Array (AFA) XtremIO 4.0 enhancements including larger clusters and larger nodes to boost performance, capacity and availability, along with copy service updates among other improvements.
Preview of DSSD shared (inside a rack) external flash Solid State Device (SSD), including more details. While much of DSSD is still under NDA, EMC did provide more public details at EMCworld. Between what was displayed and announced publicly at EMCworld, as well as what can be found via Google (or other searches), you can piece together more of the DSSD story. What is known publicly today is that DSSD leverages the new Non-Volatile Memory express (NVMe) access protocol built upon underlying PCIe technology. More on DSSD in future discussions; if you have not done so, get an NDA deep dive briefing on it from EMC.
ScaleIO is now available via a free download here, including both Windows and Linux clients along with instructions for those operating systems as well as VMware.
ViPR can also be downloaded for free here (it has been previously available), and it has also been placed into open source by EMC.
What EMC announced since EMCworld 2015
Acquisition of cloud services (and software tools) vendor Virtustream for $1.2B adding to the federation cloud services portfolio (companion to VMware vCloud Air).
Release of ECS 2.0, including a free download here. This new version of ECS (Elastic Cloud Storage) can be used independent of the ViPR controller, or in conjunction with ViPR. In addition, ECS now has about 80% of the functionality of the Centera object storage platform. The remaining 20% (mainly regulatory compliance governance) will be added to ECS in the future, providing a migration path for Centera customers. In case you are wondering what EMC does with Centera, Atmos, ViPR and now ECS: the answer is that ECS can work with or without ViPR, and the functionality of Centera and Atmos is being rolled into ECS. ECS, as a refresher, is software that transforms general purpose industry standard servers with direct storage into a scale-out HDFS and object storage solution.
Check out EMCcode including S3motion, which I use and have reviewed here. Also check out EMCcode Rex-Ray which, if you are into Docker containers, should be of interest; I know I'm interested in it.
What this all means and wrap-up
There were no single major explosive announcements; however, the sum of all the announcements together should not be overshadowed by the made-for-TV (or web) big tent productions and entertainment. What EMC announced was effectively a question: how would you like, how do you want and need your storage and associated data services along with management wrapped?
By being wrapped: do you want your software defined storage management and storage wrapped in a legacy turnkey solution such as VMAX3, VNX or Isilon? Do you want or need it to be hybrid or all flash, converged and unified, block, file or object?
Or do you need or want the software defined storage management and storage to be "shrink wrapped" as a download so you can deploy it on your own hardware ("tin wrapped"), or as a VSA ("virtual wrapped"), or cloud wrapped? Do you need or want the software defined storage management and storage to leverage anybody's hardware while being open source?
How do you need or want your storage to be wrapped to fit your specific needs? That, IMHO, was the essence of what EMC announced at EMCworld 2015; granted, the motorcycles and other production entertainment were engaging as well as educational.
VMware vCloud Air Server StorageIOlab Test Drive with videos
Recently I was invited by VMware vCloud Air to do a free hands-on test drive of their actual production environment. Some of you may already be using VMware vSphere, vRealize and other software defined data center (SDDC) aka Virtual Server Infrastructure (VSI) or Virtual Desktop Infrastructure (VDI) tools among others. Likewise some of you may already be using one of the many cloud compute or Infrastructure as a Service (IaaS) offerings such as Amazon Web Services (AWS) Elastic Compute Cloud (EC2), Centurylink, Google Cloud, IBM Softlayer, Microsoft Azure, Rackspace or Virtustream (being bought by EMC) among many others.
VMware vCloud Air provides a platform similar to those just mentioned among others for your applications and their underlying resource needs (compute, memory, storage, networking) to be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises based VMware solutions you may be familiar with.
You can give VMware vCloud Air a trial for free while the offer lasts by clicking here (service details here). Basically, if you click on the link and register a new account for using VMware vCloud Air, they will give you up to $500 USD in service credits to use in the real production environment while the offer lasts, which iirc is through the end of June 2015.
Click on above image to view video part I
Click on above image to view video part II
What this means is that you can go and set up some servers with as many CPUs or cores, memory, Hard Disk Drive (HDD) or flash Solid State Device (SSD) storage, and external IP networks as you need, using various operating systems (CentOS, Ubuntu, Windows 2008, 2012, 2012 R2), for free, or until you use up the service credits.
Speaking of which, let me give you a bit of a tip or hint: even though you can get free time, if you provision a fast server with lots of fast SSD storage and leave it sitting idle overnight or over a weekend, you will chew up your free credits rather fast. So the tip, which should be common sense: if you are going to do some proofs of concept and then leave things alone for a while, power the virtual cloud servers off to stretch your credits further. On the other hand, if you have something that you want to run on a fast server with fast storage over a weekend or longer, give that a try; just pay attention to your resource usage and possible charges should you exhaust your service credits.
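To put some rough numbers behind that tip, here is a back-of-the-envelope sketch; the hourly rate is a made-up placeholder (actual vCloud Air charges vary by configuration), and only the $500 USD credit figure comes from the offer mentioned above.

```python
# Back-of-the-envelope credit burn estimate. The hourly rate is a made-up
# placeholder; only the $500 USD service credit comes from the offer above.
credits_usd = 500.00
hourly_rate = 0.75            # assumed $/hour for a fast VM with SSD storage

weekend_hours = 2 * 24        # VM left running idle Saturday and Sunday
idle_cost = weekend_hours * hourly_rate

print(f"idle weekend cost: ${idle_cost:.2f}")                    # $36.00
print(f"total runtime on credits: {credits_usd / hourly_rate:.0f} hours")
```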
My Server StorageIO test drive mission objective
For my test drive, I created a new account by using the above link to get the service credits. Note that you can use your regular VMware account with vCloud Air, however you won't get the free service credits. So while it is a few minutes of extra work, the benefit was worth it vs. simply using my existing VMware account and racking up more cloud services charges on my credit card. As part of this Server StorageIOlab test drive, I created two companion videos, part I here and part II here, that you can view to follow along and get a better idea of how vCloud works.
Phase one, create the virtual data center, database server, client servers and first setup
My goal was to set up a simple Virtual Data Center (VDC) that would consist of five Windows 2012 R2 servers: one would be a MySQL database server, with the other four being client application servers. You can download MySQL from here at Oracle as well as via other sources. For applications, to simplify things, I used HammerDB as well as Benchmark Factory, which is part of the Quest Toad tool set for database admins. You can download a free trial copy of Benchmark Factory here, and HammerDB here. Another tool that I used for monitoring the servers is Spotlight on Windows (SoW), which is also free here. Speaking of tools, here is a link to various server and storage I/O performance as well as monitoring tools.
Links to the tools I used for this test drive are included in the paragraph above.
Recap of what was done in phase one, watch the associated video here.
After the initial setup (e.g. part I video here), the next step was to add some more virtual machines and take a closer look at the environment. Note that most of the work in setting up this environment involved Windows, MySQL, HammerDB, Benchmark Factory, Spotlight on Windows and other common tools, so their installation is not a focus in these videos or this post; perhaps a future post will dig into those in more depth.
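For those following along, here is an optional sketch (not part of the original setup; the host name and credentials are placeholders) showing one way to confirm from a client server that the MySQL database server is reachable before pointing HammerDB or Benchmark Factory at it.

```python
# Optional sanity check: verify the MySQL database server is reachable
# before running benchmarks. Host, user, password and database are
# placeholders for your own environment.
import pymysql

conn = pymysql.connect(host="192.168.1.10", user="bench",
                       password="secret", database="test")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print("MySQL version:", cur.fetchone()[0])
finally:
    conn.close()
```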
What was done during phase II (view the video here)
There is much more to VMware vCloud Air, and on their main site there are many useful links including overviews, how-to tutorials, product and service offering details and much more here. Besides paying attention to your resource usage to avoid being surprised by service charges, two other tips I can pass along that are also mentioned in the videos (here and here) are to pay attention to what region you set up your virtual data centers in, and to have your network thought out ahead of time to streamline setting up the NAT, firewall and gateway configurations.
Where to learn more
Learn more about vCloud Air and related topics, themes, trends, tools and technologies via the following links:
VMware vCloud Air home page, including complimentary service credits while the offer lasts
What’s most important to know about my cloud privacy policy?
What this all means and wrap-up
Overall I like the VMware vCloud Air service which, if you are VMware centric, will be a familiar cloud option including integration with vCloud Director and other tools you may already have in your environment. Even if you are not familiar with VMware vSphere and associated vRealize tools, the vCloud service is intuitive enough that you can be productive fairly quickly. On one hand, vCloud Air does not have the extensive menu of service offerings to choose from such as with AWS, Google, Azure or others; however, that also means a simpler menu of options to choose from, which simplifies things.
I had wanted to spend some time actually using vCloud and the offer to use some free service credits in the production environment made it worth making the time to actually setup some workloads and do some testing. Even if you are not a VMware focused environment, I would recommend giving VMware vCloud Air a test drive to see what it can do for you, as opposed to what you can do for it…
Modernizing Data Protection = Using new and old things in new ways
This is part of an ongoing series of posts that are part of www.storageioblog.com/data-protection-diaries-main/ on data protection including archiving, backup/restore, business continuance (BC), business resiliency (BR), data footprint reduction (DFR), disaster recovery (DR) and High Availability (HA), along with related themes, tools, technologies, techniques, trends and strategies.
Keep in mind that a fundamental goal of an Information Technology (IT) organization is to protect, preserve and serve data and information in a cost-effective as well as productive way when needed. There is no such thing as an information recession, with more data being generated and processed. In addition to there being more of it, data is also getting larger, having more dependencies on it being available, as well as living longer (e.g. retention).
Proof Points, No Data or Information Recession
A quick, easy proof point of more data and it getting larger is your cell phone and the pictures it takes. Compare the size of those photos today to what you had with your previous generation smart phone or even digital camera: as the megapixels (e.g. resolution and size of data) increased, the size of the media (e.g. storage) needed to save them also grew. Another proof point is to look at your presentations, documents, web sites and other mediums and how much rich or unstructured content (e.g. photos, videos) exists in those now vs. a few years ago. Yet another proof point is to look at your structured little data databases and how there are more rows and columns, as well as how some of those columns have gotten larger or point to external "blobs" or "objects" that have also gotten larger.
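As a rough illustration of the photo proof point, here is a minimal sketch of the arithmetic; the megapixel counts and the roughly 10:1 JPEG compression ratio are illustrative assumptions, since actual file sizes vary by camera and scene.

```python
# Rough photo-size arithmetic. The megapixel counts and ~10:1 JPEG
# compression ratio are illustrative assumptions only.
def approx_jpeg_mb(megapixels, bytes_per_pixel=3, compression=10):
    raw_bytes = megapixels * 1_000_000 * bytes_per_pixel
    return raw_bytes / compression / 1_000_000

for mp in (3, 8, 16):    # older phone, recent phone, newer phone or camera
    print(f"{mp} MP -> ~{approx_jpeg_mb(mp):.1f} MB per photo")
# 3 MP -> ~0.9 MB, 8 MP -> ~2.4 MB, 16 MP -> ~4.8 MB
```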
Industry trend and challenges
There has been industry buzz the past several years around data protection modernizing, modernizing data protection or simply modernizing backup along with modernizing your data and information infrastructure. Many of these conversations focus around swapping out an older technology in favor of whatever the new industry buzzword trend is (e.g. swap tape for disk, disk for cloud) or perhaps from one data protection, backup, archive or copy tool for another. Some of these conversations also focus around swapping legacy for virtual, cloud or some other variation of software defined marketing.
The Opportunity to do new things
What is common with all the above is basically swapping out one technology, tool, medium or technique for another new one, yet using it in old ways. For example, tape gets swapped for disk, yet the same approach to when, where, why, how often and what gets copied or protected is left the same. Sure, some new tools and technologies get introduced. However, when was the last time you put the tools down, took a step back and revisited the fundamental questions of how and why you are doing data protection the way it is being done? When was the last time you thought about data protection as an asset or business enabler as opposed to a cost center, overhead or afterthought?
What’s in your data protection toolbox, do you know what to use when?
What about modernizing beyond the tools
One of the challenges with modernizing is that there is a cost involved, including people time, staff skills as well as budgets, not to mention keeping things running, so how do you go about paying for any improvements? Sure, you can go get a data infrastructure or habitat for technology (aka data home improvement) loan, however there are costs associated with that.
What about reducing data protection costs?
So why not self-fund the improvements and modernization activities by finding and removing costs, eliminating complexity vs. moving and masking issues? Part of this can be accomplished by simply revisiting whether you are treating all your applications and data the same from a data protection perspective. Are you providing a data protection service capability to your organization that is based on business wants or business needs? For example, does the business want recovery time objective (RTO) 0 and recovery point objective (RPO) 0 for all applications, while it needs RTO 4 hours and RPO 15 minutes for application-a, while application-b requires RTO 12 hours and RPO of 2 hours, and application-c must have RTO 24 hours with RPO of 12 hours?
As a reminder, RTO is how much time, or how quickly, you need your applications and data to be restored and made ready for use. RPO is the point in time to which data needs to be protected, or the amount of data or time frame of data that could be lost or missing. Thus RTO = 0 means instant recovery with no downtime, and RPO = 0 means no loss of data. RTO of one day and RPO of ten (10) minutes means applications and their data are ready for use within 24 hours and no more than 10 minutes of data can be lost (e.g. the granularity of protection coverage). Also keep in mind that you can have various RTO and RPO combinations to meet your specific application along with business needs as part of a tiered data protection strategy implementation.
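To make the tiering idea tangible, here is a minimal sketch that encodes per-application RTO/RPO targets and checks whether a given protection interval satisfies each RPO; the application names and numbers simply mirror the hypothetical example above.

```python
# Tiered RTO/RPO targets; values mirror the hypothetical example above.
requirements = {
    "application-a": {"rto_hours": 4,  "rpo_minutes": 15},
    "application-b": {"rto_hours": 12, "rpo_minutes": 120},
    "application-c": {"rto_hours": 24, "rpo_minutes": 720},
}

def meets_rpo(protection_interval_minutes, rpo_minutes):
    # A copy taken every N minutes can lose at most N minutes of data.
    return protection_interval_minutes <= rpo_minutes

snapshot_interval = 60   # e.g. hourly snapshots applied to every application
for app, req in requirements.items():
    print(app, "hourly protection meets RPO?",
          meets_rpo(snapshot_interval, req["rpo_minutes"]))
# application-a fails (needs 15 minutes), b and c pass -- hence tiering.
```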
With RTO and RPO in mind, when was the last time you sat down with the business and applications people to revisit what they want vs. what they must have? From those conversations you can easily transition into how long to keep data, how many copies to keep and in what places, among other things, which in turn allows you to review data protection as well as start using both old and new technologies, tools and techniques in new ways.
Where to learn more
Learn more about data protection and related topics, themes, trends, tools and technologies via the following links:
How do primary storage clouds and cloud for backup differ?
What’s most important to know about my cloud privacy policy?
What this all means and wrap-up
Data protection is a broad topic that spans from logical and physical security to high availability (HA), business continuance (BC), business resiliency (BR), disaster recovery (DR) and archiving (including life beyond compliance), along with various tools, technologies and techniques. Key is aligning those to the needs of the business or organization for today's as well as tomorrow's requirements. Instead of doing things the way they have been done in the past, which may have been based on what was known or possible given technology capabilities at the time, why not start using new and old things in new ways? Let's start using all the tools in the data protection toolbox, regardless of whether they are new or old, cloud, virtual, physical, software defined product or service, in new ways while keeping the requirements of the business in focus.
Keeping with the theme of protect, preserve and serve, for data protection to be modernized it needs to become, and be seen as, a business asset or enabler vs. an afterthought or cost overhead topic. Also, keep in mind that only you can prevent data loss; are your restores ready for when you need them? One of the fundamental goals of IT is to protect, preserve and serve information, including its applications as well as data, when and where needed in a cost-effective way.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Hello and welcome to this April 2015 Server and StorageIO update newsletter.
This month's newsletter has a focus on cloud and object storage for bulk data, unstructured data, big data and archiving, among other scenarios.
Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and Podcasts along with in the news commentary appearing soon.
April Newsletter Feature Theme Cloud and Object Storage Fundamentals
There are many facets to object storage including technology implementation, products, services, access and architectures for various applications and use scenarios. The following is a short synopsis of some basic terms and concepts associated with cloud and object storage.
Common cloud and object storage terms
Account or project – Top of the hierarchy; represents the owner or billing information for a service, and where buckets are attached.
Availability Zone (AZ) – A rack of servers and storage, or a data center, across which data is spread for storage and durability.
Bucket or Container – Where objects, or sub-folders containing objects, are attached and accessed. Note that in some environments such as AWS S3 you can have sub-folders in a bucket.
Connector – How your applications access the cloud or object storage, such as via an API (e.g. S3, Swift, REST, CDMI), Torrent, JSON, NAS file, block, or other access gateway or software.
Durability – Data dispersed with copies in multiple locations to survive failure of storage or server hardware, software, zone or even region. Availability = Access + Durability.
End-point – Where or what your software, application or tool and utilities or gateways attach to for accessing buckets and objects.
Ephemeral – Temporary or non-persistent
Eventual consistency – Data is eventually made consistent; think in terms of asynchronous or deferred writes where there is a time lag, vs. synchronous or real-time updates.
Immutable – Persistent, non-altered or write once read many copy of data. Objects generally are not updated; rather, new objects are created.
Object – Byte (or bit) stream that can be as small as one byte or as large as several TBytes (some solutions and services support objects up to 5 TBytes in size). The object contains whatever data, in any organization, along with metadata. Different solutions and services support from a couple hundred KBytes to MBytes worth of metadata. In terms of what can be stored in an object: anything from files, videos, images, virtual disks (VMDKs, VHDX), ZIP or tar files, backup and archive save sets, executable images or ISOs, to anything else you want.
OPS – Objects per second, or how many objects are accessed, similar to an IOP. Access includes gets, puts, lists, heads and deletes for a CRUD-style interface (e.g. Create, Read, Update, Delete).
Region – Location where data is stored that can include one or more data centers also known as Availability Zones.
Sub-folder – While object storage can be accessed in a flat namespace, for commonality and organization some solutions and services support the notion of sub-folders that resemble a traditional directory hierarchy.
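To tie several of these terms together (end-point, bucket, object, metadata, sub-folders in a flat namespace, and the get/put/list/head/delete style of access), here is a minimal sketch using Python and the boto3 library against AWS S3. The bucket and object names are hypothetical, the bucket is assumed to already exist, and other S3-compatible services would use a different end-point.

```python
# Minimal sketch of bucket/object basics using boto3 against AWS S3.
# Bucket and object names are hypothetical; the bucket is assumed to
# exist and credentials are assumed to be configured (e.g. via AWS CLI).
import boto3

s3 = boto3.client("s3")  # default end-point; other services differ

bucket = "example-storageio-bucket"   # hypothetical bucket name
key = "photos/2015/example.jpg"       # "sub-folder" is part of the key

# PUT: store an object along with user-defined metadata.
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"...image bytes would go here...",
    Metadata={"camera": "phone", "megapixels": "12"},
)

# HEAD: retrieve the object's metadata without reading its data.
head = s3.head_object(Bucket=bucket, Key=key)
print(head["Metadata"])

# LIST: enumerate objects under a prefix (the flat namespace in action).
for obj in s3.list_objects_v2(Bucket=bucket, Prefix="photos/")["Contents"]:
    print(obj["Key"], obj["Size"])

# GET then DELETE round out the CRUD-style interface.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
s3.delete_object(Bucket=bucket, Key=key)
```

Note how the "sub-folder" is simply part of the object's key name, and how HEAD returns metadata without transferring the object's data, which is handy when objects are large.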
AWS recently announced their new cloud based Elastic File System (EFS) to complement their existing Elastic Block Storage (EBS) offerings. However, are you aware of what is going on with cloud files within OpenStack?
For those who are familiar with OpenStack or simply talk about it and Swift object storage, or perhaps Cinder block storage, are you aware that there is also a file (NAS or Network Attached Storage) component called Manila?
In concept, Manila should provide a capability similar to what AWS has recently announced with their Elastic File System (EFS), or depending on your perspective, perhaps the other way around. If you are familiar with and have done anything with Manila, what are your initial thoughts and perspectives?
What this all means
People routinely tell me this is the most exciting and interesting time ever in servers, storage, I/O networking, hardware, software, backup or data protection, performance, cloud and virtual (take your pick), with which I would not disagree.
However, for the past several years (no, make that decade), there have been plenty of new and more interesting things, including in adjacent areas. I predict that at least for the next few years (no, make that decades), we will continue to see plenty of new and interesting things. The question is: what's applicable to you and your environment vs. simply fun and interesting to watch?
Data Protection Gumbo Podcast Protect Preserve and Serve Data
In this episode, Greg Schulz is a guest on Data Protection Gumbo, hosted by Demetrius Malbrough (@dmalbrough). The conversation covers various aspects of data protection, with a focus on protect, preserve and serve for information, applications and data across different environments and customer segments.
While we discuss enterprise and SMB data protection, we also talk about trends from mobile to the cloud, among many other tools, technologies and techniques. Check out the podcast here.
Springtime in Kentucky With Kendrick Coleman of EMCcode Cloud Object Storage S3motion and more
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Data Protection Gumbo Podcast Description
Data Protection Gumbo is set up with the aim of expanding the awareness of anyone responsible for protecting mission-critical data, by providing them with a mix of the latest news, data protection technologies, and interesting facts on topics in the data backup and recovery industry.
Protect Preserve and Serve Applications, Information and Data
Keep in mind that a fundamental role of Information Technology (IT) is to protect, preserve and serve a business or organization's information assets, including applications, configuration settings and data, for use when or where needed.
Where to learn more
Learn more about data protection and related trends, tools and technologies via the following links:
Data protection is a broad topic that spans from logical and physical security to high availability (HA), disaster recovery (DR), business continuance (BC), business resiliency (BR) and archiving (including life beyond compliance), along with various tools, technologies and techniques. Keeping with the theme of protect, preserve and serve, for data protection to be modernized it needs to become, and be seen as, a business asset or enabler vs. an afterthought or cost overhead topic. Also, keep in mind that only you can prevent data loss; are your restores ready for when you need them?
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
S3motion Buckets Containers Objects AWS S3 Cloud and EMCcode
It’s springtime in Kentucky, and recently I had the opportunity to have a conversation with Kendrick Coleman to talk about S3motion, buckets, containers, objects, AWS S3, cloud and object storage, node.js, EMCcode and open source, among other related topics. The conversation is available as a podcast here, or as a video here, as well as at StorageIO.tv.
In this Server StorageIO industry trends perspective podcast episode, @EMCcode (part of EMC) developer advocate Kendrick Coleman (@KendrickColeman) joins me for a conversation. Our conversation spans springtime in Kentucky (where Kendrick lives), which means Bourbon and horse racing, as well as his blog (www.kendrickcoleman.com).
Btw, in the podcast I refer to Captain Obvious and Kendrick's beard; for those not familiar with who or what @Captainobvious is, click here to learn more.
What about Clouds Object Storage Programming and other technical stuff?
Of course we also talk some tech, including what EMCcode is, the EMC Federation, Cloud Foundry, clouds, object storage, buckets, containers, objects, node.js, Docker, OpenStack, AWS S3, microservices, and the S3motion tool that Kendrick developed.
Kendrick explains the motivation behind S3motion along with trends in and around objects (including GET and PUT vs. traditional read and write), as well as programming, related topic themes, and how context matters.
I have used S3motion for moving buckets, containers and objects around, including between AWS S3, Google Cloud Storage (GCS) and Microsoft Azure, as well as to/from local storage. S3motion is a good tool to have in your server storage I/O toolbox for working with cloud and object storage, along with others such as Cloudberry, S3fs, Cyberduck and S3 Browser, among many others.
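I am not going to reproduce S3motion's own command syntax here; however, to give a rough idea of what such tools do under the covers, the following is a minimal boto3-based sketch in Python that copies objects between two S3-compatible end-points. The end-point URL, credential profile names and bucket names are hypothetical.

```python
# Minimal sketch of copying objects between two S3-compatible services,
# conceptually similar to what a tool like S3motion automates. End-point
# URL, profile names and bucket names below are hypothetical.
import boto3

# Source: AWS S3 (default end-point), using a named credential profile.
src = boto3.Session(profile_name="aws-prod").client("s3")

# Destination: some other S3-compatible service via a custom end-point.
dst = boto3.Session(profile_name="other-cloud").client(
    "s3", endpoint_url="https://objects.example-cloud.test"
)

def copy_bucket(src_bucket: str, dst_bucket: str) -> None:
    """GET each object from the source and PUT it to the destination."""
    paginator = src.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            data = src.get_object(Bucket=src_bucket, Key=obj["Key"])
            # Reading the whole body is fine for small objects; large
            # objects would want streaming or multipart transfers.
            dst.put_object(
                Bucket=dst_bucket,
                Key=obj["Key"],
                Body=data["Body"].read(),
            )

copy_bucket("my-aws-bucket", "my-other-bucket")
```

Note that the loop is simply a series of GETs from one service and PUTs to another (object-speak for reads and writes, as discussed in the podcast); a production tool would also add retries and metadata preservation.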
How do primary storage clouds and cloud for backup differ?
What’s most important to know about my cloud privacy policy?
What this all means and wrap-up
Context matters when it comes to many things, particularly objects, as they can mean different things. Tools such as S3motion make it easy to move your buckets or containers along with objects from one cloud storage system, solution or service to another. Also check out EMCcode to see what they are doing on different fronts, from supporting new and greenfield development with Cloud Foundry and PaaS to OpenStack to bridging current environments to the next generation of platforms. Also check out Kendrick's blog site, as he has a lot of good technical content as well as some other fun stuff to learn about. I look forward to having Kendrick on as a guest again soon to continue our conversations. In the meantime, check out S3motion to see how it can fit into your server storage I/O toolbox.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved