August 2014 Industry trends and perspectives
The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends and perspectives about clouds, virtualization, data and storage infrastructure topics, among related themes.
StorageIO comments and perspectives in the news
Virtual Desktop Infrastructure (VDI) remains a popular industry and IT customer topic, not to mention one of the favorite themes of Solid State Device (SSD) vendors. SSD component and system solution vendors along with their supporters love VDI because the by-product of aggregation (e.g. consolidation) is aggravation. Aggravation is the increased storage I/O demand (IOPs, bandwidth, response time) that results from consolidating the various desktops. It should not be a surprise that some of the biggest fans encouraging organizations to adopt VDI are the SSD vendors. Read some of my comments and perspectives on VDI here at FedTech Magazine.
Speaking of virtualizing the data center, software defined data centers (SDDC) along with software defined networking (SDN) and software defined storage (SDS) remain popular, including some software defined marketing (SDM). Here are some of my comments and perspectives on moving beyond the hype of SDDC.
Recently the Fibre Channel Industry Association (FCIA), which works with the T11 standards body on both legacy or classic Fibre Channel (FC) as well as the newer FC over Ethernet (FCoE), made some announcements. These announcements include enhancements such as Fibre Channel Back Bone version 6 (FC-BB-6) among others. Both FC and FCoE are alive and doing well, granted one (FC) has been around longer and can be seen at its plateau, while the other (FCoE) continues to evolve and grow in adoption. In some ways FCoE is in a similar role today to where FC was in the late 90s and early 2000s, ironically facing some common FUD. You can read my comments here as part of a quote in support of the announcement, along with more of my industry trend perspectives in this blog post here.
Buyers guides are popular with vendors and VARs as well as IT organizations (e.g. customers). Following are some of my comments and industry trend perspectives appearing in Enterprise Storage Forum, including perspectives on buyers guides for Enterprise File Sync and Share (EFSS), Unified Data Storage and Object Storage.
EMC has come under pressure, as mentioned in earlier StorageIO update newsletters, to increase its shareholder benefit including a spin-off of VMware. Here are some of my comments and perspectives that appeared in CruxialCIO. Read more industry trends and perspectives comments on the StorageIO news page.
StorageIO video and audio podcasts
StorageIOblog posts and perspectives
Despite being declared dead, traditional or classic Fibre Channel (FC) along with FC over Ethernet (FCoE) continues to evolve with FC-BB-6; read more here. VMworld 2014 took place this past week and included announcements about EVO:RACK and EVO:RAIL (more on this in a future edition). You can get started learning about EVO:RACK and EVO:RAIL at Duncan Epping's (aka @DuncanYB) Yellow Bricks site. VMware Virtual SAN (VSAN) is at the heart of EVO, which you can read an overview of here in this earlier StorageIO update newsletter (March 2014). Also watch for some extra content that I'm working on, including video podcasts, articles and blog posts from my trip to VMworld 2014.
One of the themes in the background of VMworld 2014 is the current beta of VMware vSphere V6 along with Virtual Volumes aka VVOL's. The following are a couple of my recent posts, including a primer overview of VVOL's along with a poll where you can cast your vote. Check out Are VMware VVOL's in your virtual server and storage I/O future? and VMware VVOL's and storage I/O fundamentals (Part 1) along with (Part 2).
StorageIO events and activities
The StorageIO calendar continues to evolve, with several new events added for September and well into the fall, and more in the works, including Dutch European sessions the week of October 6th in Nijkerk, Holland (learn more here). The following are some upcoming September events, including live in-person seminars, conferences, keynote and speaking activities as well as on-line webinars, Twitter chats, Google+ hangouts among others.
Note: Dates, times, venues and subject content are subject to change; refer to the events page for current status. Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, software defined, big data, little data, cloud and object storage, performance and management trends among others. Vendors, VARs and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.
Server and StorageIO Technology Tips and Tools
In addition to the industry trends and perspectives comments in the news mentioned above, along with the StorageIO blog posts, the following are some of my recent articles and tips that have appeared in various industry venues. Over at the new Storage Acceleration site I have a couple of pieces: the first is What, When, Why & How to Accelerate Storage and the other is Tips for Measuring Your Storage Acceleration.
StorageIO Update Newsletter Archives
Click here to view earlier StorageIO Update newsletters (HTML and PDF versions) at www.storageio.com/newsletter. Subscribe to this newsletter (and pass it along) by clicking here (via the secure Campaigner site).
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
VMware VVOLs and storage I/O fundamentals (Part 2)
VMware VVOL’s and storage I/O fundamentals (Part II)
Note that this is a three part series with the first piece here (e.g. Are VMware VVOL's in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 2).
Picking up from where we left off in the first part of VMware VVOL's and storage I/O fundamentals, let's take a closer look at VVOL's.
First however, let's be clear that while VMware uses terms including object and object storage in the context of VVOL's, it's not the same as some other object storage solutions. Learn more about object storage here at www.objectstoragecenter.com
Are VVOL’s accessed like other object storage (e.g. S3)?
No, VVOL's are accessed via the VMware software and associated APIs that are supported by various storage providers. VVOL's are not LUNs like regular block (e.g. DAS or SAN) storage that uses SAS, iSCSI, FC, FCoE or IBA/SRP, nor are they NAS volumes like NFS mount points. Likewise VVOL's are not accessed using any of the various object storage access methods mentioned above (e.g. AWS S3, REST, CDMI, etc); instead they are an application specific implementation. For some of you this approach of an application specific or unique storage access method may be new, perhaps revolutionary; on the other hand, some of you might be having a DejaVu moment right about now.
A VVOL is not a LUN in the context of what you may know and like (or hate, even if you have never worked with them), likewise it is not a NAS volume like you know (or have heard of), nor are they objects in the context of what you might have seen or heard of, such as S3 among others.
Keep in mind that what makes up a VMware virtual machine are the VMX, VMDK and some other files (shown in the figure below), and if enough information is known about where those blocks of data are or can be found, they can be worked upon. Also keep in mind that, at least near-term, block is the lowest common denominator that all file systems and object repositories get built upon.
VMware ESXi storage I/O, IOPS and data store basics
Here is the thing: VVOL's will be accessible via a block interface such as iSCSI, FC or FCoE, or for that matter over Ethernet based IP using NFS. Think of these storage interfaces and access mechanisms as the general transport for how vSphere ESXi will communicate with the storage system (e.g. their data path) under vCenter management.
What is happening inside the storage system that will be presented back to ESXi will be different than normal SCSI LUN contents and only understood by the VMware hypervisor. ESXi will still tell the storage system what it wants to do, including moving blocks of data. The storage system however will have more insight and awareness into the context of what those blocks of data mean. This is how storage systems will be able to more closely integrate snapshots, replication, cloning and other functions, by having awareness of which data to move, as opposed to moving or working with an entire LUN where a VMDK may live. Keep in mind that the storage system will still function as it normally would; just think of VVOL as another or new personality and access mechanism used for VMware to communicate with and manage storage.
VMware VVOL concepts (in general) with VMDK being pushed down into the storage system
Think in terms of iSCSI (or FC or something else) for block or NFS for NAS as being the addressing mechanism to communicate between ESXi and the storage array, except that instead of traditional SCSI LUN access and mapping, more work and insight is pushed down into the array. Also keep in mind that a LUN is simply an address range made up of Logical Block Numbers (LBNs) or Logical Block Addresses (LBAs). The storage array in turn manages placement of data on SSDs or HDDs, likewise using blocks aka LBAs/LBNs. In other words, a host that does not speak VVOL would get an error if trying to use a LUN or target on a storage system that is a VVOL, that's assuming it is not masked or hidden ;).
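To make the LBA/LBN idea concrete, here is a minimal sketch; the 512-byte sector size and the 1 MiB offset are illustrative assumptions rather than anything VVOL specific.

```python
# Minimal illustration of LUN addressing: a LUN is just a linear range of
# logical block addresses (LBAs). Sector size and offset are assumptions
# for illustration only.

SECTOR_SIZE = 512  # bytes per logical block (4096 on 4K-native devices)

def byte_offset_to_lba(offset_bytes, sector_size=SECTOR_SIZE):
    """Return (lba, offset_within_block) for a byte offset on a LUN."""
    return divmod(offset_bytes, sector_size)

# Example: where does byte 1,048,576 (1 MiB) of a virtual disk land on the LUN?
lba, within = byte_offset_to_lba(1_048_576)
print(lba, within)  # -> 2048 0  (1 MiB divided by 512 bytes per block)
```

A traditional array only ever sees such block addresses; it has no idea which VMDK or virtual machine they belong to, which is exactly the gap VVOL's aim to close.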
What’s the Storage Provider (SP)
The Storage Provider aka SP is created by the, well, the provider of the storage system or appliance leveraging a VMware API (hint: sign up for the beta and there is an SDK). Simply put, the SP is a two-way communication mechanism leveraging VASA for reporting information, configuration and other insight up to the VMware ESXi hypervisor, vCenter and other management tools. In addition the storage provider receives VASA configuration information from VMware about how to configure the storage system (e.g. storage containers). Keep in mind that the SP is the out-of-band management interface between the storage system supporting and presenting VVOL's and the VMware hypervisors.
What’s the Storage Container (SC)
This is a storage pool created on the storage array or appliance (e.g. VMware vCenter works with the array and storage provider (SP) to create it) in place of using a normal LUN. With an SP and PE, the storage container becomes visible to ESXi hosts and VVOL's can be created in the storage container until it runs out of space. Also note that the storage container takes on the storage profile assigned to it, which is inherited by the VVOLs in it. This is in place of presenting LUNs to ESXi that you then use to create VMFS data stores (or use as raw devices) and then carve storage to VMs.
Protocol endpoint (PE)
The PE provides visibility for the VMware hypervisor to see and access VMDKs and other objects (e.g. .vmx, swap, etc) stored in VVOL's. The protocol endpoint (PE) manages or directs I/O received from the VM, enabling scaling across many virtual volumes by leveraging multipathing of the PE (inherited by the VVOL's). Note that for storage I/O operations, the PE is simply a pass-thru mechanism and does not store the VMDK or other contents. If using iSCSI, FC, FCoE or another SAN interface, then the PE works on a LUN basis (again not actually storing data), and if using NAS NFS, then with a mount point. The key point is that the PE gets out of the way.
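Pulling the SP, SC and PE roles together, here is a purely conceptual Python sketch of the division of labor described above; the class and method names are my own illustrative assumptions, not VMware's VASA API or any vendor's implementation.

```python
# Conceptual sketch only: models the roles of Storage Container (capacity
# pool), Storage Provider (out-of-band control path) and Protocol Endpoint
# (in-band pass-through) as described above. Names are illustrative.

class StorageContainer:
    """Pool on the array in which VVOLs are created (no LUN carving)."""
    def __init__(self, capacity_gb, profile):
        self.capacity_gb = capacity_gb
        self.profile = profile            # VVOLs inherit this profile
        self.vvols = {}                   # name -> size_gb

    def create_vvol(self, name, size_gb):
        if sum(self.vvols.values()) + size_gb > self.capacity_gb:
            raise RuntimeError("storage container out of space")
        self.vvols[name] = size_gb

class StorageProvider:
    """Out-of-band, two-way (VASA-style) control path to vCenter/ESXi."""
    def report_capabilities(self, container):
        return {"capacity_gb": container.capacity_gb,
                "profile": container.profile,
                "vvols": len(container.vvols)}

class ProtocolEndpoint:
    """In-band I/O pass-through; stores no VVOL data itself."""
    def __init__(self, array_io_handler):
        self.array_io_handler = array_io_handler   # the array's data path

    def submit_io(self, vvol_name, op, offset, length):
        # Only forwards the request; the array resolves which internal
        # blocks belong to that VVOL.
        return self.array_io_handler(vvol_name, op, offset, length)

# Tiny usage example
sc = StorageContainer(capacity_gb=1024, profile="gold")
sc.create_vvol("vm01-data.vmdk", size_gb=100)
pe = ProtocolEndpoint(lambda *args: f"array handled {args}")
print(StorageProvider().report_capabilities(sc))
print(pe.submit_io("vm01-data.vmdk", "read", offset=0, length=4096))
```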
VVOL Poll
What are your VVOL plans? View results and cast your vote here
Wrap up (for now)
There certainly are many more details to VVOL's that you can get a preview of in the beta, as well as via various demos, webinars and VMworld sessions as more becomes public. However for now, hope you found this quick overview of VVOL's of use. Since VVOL's at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta or poke around the web (for now). Also if you have not seen the first part overview to this piece, check it out here, as I give some more links to get you started learning more about VVOL's.
Keep an eye on and learn more about VVOL's at VMworld 2014 as well as in various other venues.
IMHO VVOL's are or will be in your future, however the question will be: is there going to be a back to the future moment for some of you with VVOL's?
What VVOL questions, comments and concerns are in your future and on your mind?
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
VMware VVOLs storage I/O fundamentals (Part 1)
VMware VVOL’s storage I/O fundamentals (Part I)
Note that this is a three part series with the first piece here (e.g. Are VMware VVOL's in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 2).
Some of you may already be participating in the VMware beta of VVOL involving one of the initial storage vendors also in the beta program.
Ok, now let’s go a bit deeper, however if you want some good music to listen to while reading this, check out @BruceRave GoDeepMusic.Net and shows here.
Taking a step back, digging deeper into Storage I/O and VVOL’s fundamentals
Instead of a VM host accessing its virtual disk (aka VMDK) stored in a VMFS formatted data store (part of the ESXi hypervisor) built on top of a SCSI LUN (e.g. SAS, SATA, iSCSI, Fibre Channel aka FC, FCoE aka FC over Ethernet, IBA/SRP, etc) or an NFS file system presented by a storage system (or appliance), VVOL's push more functionality and visibility down into the storage system. VVOL's shift more intelligence and work from the hypervisor down into the storage system. Instead of a storage system simply presenting a SCSI LUN or NFS mount point and having limited (coarse) to no visibility into how the underlying storage bits, bytes and blocks are being used, storage systems gain more awareness.
Keep in mind that even files and objects still ultimately get mapped to pages and blocks aka sectors, even on nand flash-based SSDs. However also keep an eye on some new technology such as the Seagate Kinetic drive that, instead of responding to SCSI block based commands, leverages object APIs and associated software on servers. Read more about these emerging trends here and here at objectstoragecenter.com.
With a normal SCSI LUN the underlying storage system has no knowledge of how the upper level operating system, hypervisor, file system or application such as a database (doing raw I/O) is allocating the pages or blocks of memory aka storage. It is up to the upper level storage and data management tools to map from objects and files to the corresponding extents, pages and logical block addresses (LBAs) understood by the storage system. In the case of a NAS solution, there is a layer of abstraction placed over the underlying block storage that handles file management and the associated file-to-LBA mapping activity.
Storage I/O and IOP basics and addressing: LBA’s and LBN’s
Getting back to VVOL: instead of simply presenting a LUN, which is essentially a linear range of LBAs (think of a big table or array) whose data placement and access the hypervisor then manages, the storage system now gains insight into which LBAs correspond to various entities such as a VMDK or VMX, log, clone, swap or other VMware objects. With this added insight, storage systems can now perform native and more granular functions such as clone, replication and snapshot among others, as opposed to simply working on a coarse LUN basis. Similar concepts extend over to NAS NFS based access. Granted, there is more to VVOL's, including the ability to get the underlying storage system more closely integrated with the virtual machine, hypervisor and associated management, including service management and classes or categories of service across performance, availability, capacity and economics.
What about VVOL, VAAI and VASA?
VVOL's build on earlier VMware initiatives including VAAI and VASA. With VAAI, VMware hypervisors can off-load common functions to storage systems that support features such as copy, clone and zero copy among others, similar to how a computer can off-load graphics processing to a graphics card if present.
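As a rough analogy of what off-load means (an illustrative sketch, not the actual VAAI primitives or their SCSI encodings), compare a host copying blocks itself with a host that simply asks the array to do it:

```python
# Conceptual contrast only: host-side copy vs. an array off-loaded copy.
# Function names are illustrative; the real VAAI primitives (e.g. full
# copy/clone, block zeroing) are commands the hypervisor issues to the array.

def host_side_copy(read_block, write_block, num_blocks):
    """Every block travels array -> host -> array, consuming host CPU and I/O."""
    for lba in range(num_blocks):
        data = read_block(lba)       # data pulled up to the host
        write_block(lba, data)       # and pushed back down to the array

def offloaded_copy(array_clone, src, dst, num_blocks):
    """One request; the array moves (or simply references) the blocks itself."""
    return array_clone(src, dst, num_blocks)   # data never leaves the array
```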
VASA however provides a means for visibility, insight and awareness between the hypervisor and its associated management (e.g. vCenter etc) as well as the storage system. This includes storage systems being able to communicate and publish to VMware its capabilities for storage space capacity, availability, performance and configuration among other things.
With VVOL's, VASA gets leveraged for bidirectional (e.g. two-way) communication where the VMware hypervisor and management tools can tell the storage system things such as configuration and activities to do, among others. Hence why VASA is important to have in your VMware CASA.
What’s this object storage stuff?
VVOL's are a form of object storage access in that they differ from traditional block (LUNs) and file (NAS volumes/mount points) access. However, keep in mind that not all object storage is the same, as there are different object storage access methods and architectures.
Object Storage basics, generalities and block file relationships
Avoid making the mistake of assuming that when you hear object storage it means ANSI T10 (the folks that manage the SCSI command specifications) Object Storage Device (OSD) or something else. There are many different types of underlying object storage architectures, some with block and file as well as object access front ends. Likewise there are many different types of object access that sit on top of object architectures as well as traditional storage systems.
An example of how some object storage gets accessed (not VMware specific)
Also keep in mind that there are many different types of object access mechanisms including HTTP REST based, S3 (e.g. a common industry defacto standard based on the Amazon Simple Storage Service), SNIA CDMI, SOAP, Torrent, XAM, JSON, XML, DICOM and HL7 just to name a few, not to mention various programmatic bindings or application specific implementations and APIs. Read more about object storage architectures, access and related topics, themes and trends at www.objectstoragecenter.com
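For instance, S3-style access addresses a bucket and key over HTTP rather than a LUN and LBA; the sketch below uses the boto3 library with made-up bucket and key names purely to illustrate the access model (and, to be clear, this is not how VVOL's are accessed).

```python
# Illustration of object (S3-style) access: you address a bucket + key over
# HTTP, not a LUN + LBA. Bucket and key names here are hypothetical, and
# valid AWS credentials plus an existing bucket are assumed.
import boto3

s3 = boto3.client("s3")

# Write (PUT) an object
s3.put_object(Bucket="example-bucket",
              Key="reports/2014/august.txt",
              Body=b"hello object storage")

# Read (GET) it back
obj = s3.get_object(Bucket="example-bucket", Key="reports/2014/august.txt")
print(obj["Body"].read())
```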
Let's take a break here, and when you are ready, click here to read the third piece in this series, VMware VVOL's and storage I/O fundamentals Part 2.
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Are VMware VVOLs in your virtual server and storage I/O future?
Are VMware VVOL’s in your virtual server and storage I/O future?
Note that this is a three part series with the first piece here (e.g. Are VMware VVOL’s in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 2).
With VMworld 2014 just around the corner, for some of you the question is not if Virtual Volumes (VVOL's) are in your future, but rather when, where, how and with what.
What this means is that for some, hands-on beta testing is already occurring or will be soon, while for others it might be around the corner or down the road.
Some of you may already be participating in the VMware beta of VVOL involving one of the first storage vendors also in the beta program.
On the other hand, some of you may not be in VMware centric environments and thus VVOL’s may not yet be in your vocabulary.
How do you know if VVOL's are in your future if you don't know what they are?
First, to be clear: as of the time this was written, VMware VVOL's are not released and are only in beta, having also been covered at earlier VMworlds. Consequently what you are going to read here is based on VVOL material that has already been made public in various venues including earlier VMworlds and VMware blogs among other places.
The quick synopsis of VMware VVOL’s overview:
VVOL considerations and your future
As mentioned, as of this writing, VVOL’s are still a future item granted they exist in beta.
For those of you in VMware environments, now is the time to add VVOL to your vocabulary which might mean simply taking the time to read a piece like this, or digging deeper into the theories of operations, configuration, usage, hints and tips, tutorials along with vendor specific implementations.
Explore your options, and ask yourself: do you want VVOL or do you need it?
What support does your current vendor(s) have for VVOL, or what is their statement of direction (SOD), which you might have to get from them under NDA?
This means that there will be some first vendors with some of their products supporting VVOL's, with more vendors and products following (hence watch for many statement of direction announcements).
Speaking of vendors, watch for a growing list of vendors to announce their current or plans for supporting VVOL’s, not to mention watch some of them jump up and down like Donkey in Shrek saying "oh oh pick me pick me".
When you ask a vendor if they support VVOL's, move beyond a simple yes or no; ask which of their specific products, whether it is block (e.g. iSCSI) or NAS file (e.g. NFS) based, and about other caveats or configuration options.
Watch for more information about VVOL’s in the weeks and months to come both from VMware along with from their storage provider partners.
How will VVOL impact your organization's best practices, policies and workflows, including who does what, along with associated responsibilities?
Where to learn more
Check out the companion piece to this that takes a closer look at storage I/O and VMware VVOL fundamentals here and here.
Also check out this good VMware blog via Cormac Hogan (@CormacJHogan) that includes a video demo; granted it's from 2012, however some of this stuff actually does take time and thus it is very timely. Speaking of VMware, Duncan Epping (aka @DuncanYB) at his Yellow-Bricks site has some good posts to check out as well, with links to others including this here. Also check out the various VVOL related sessions at VMworld as well as the many existing, and soon to be many more, blogs, articles and videos you can find via Google. And if you need a refresher, here is Why VASA is important to have in your VMware CASA.
Of course keep an eye here, or whichever venue you happen to read this in, for future follow-up and companion posts, and if you have not done so, sign up for the beta here as there is lots of good material including SDKs, configuration guides and more.
VVOL Poll
What are your VVOL plans? View results and cast your vote here
Wrap up (for now)
Hope you found this quick overview of VVOL's of use. Since VVOL's at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta or poke around the web (for now).
Keep an eye on and learn more about VVOL’s at VMworld 2014 as well as in various other venues.
IMHO VVOL's are or will be in your future, however the question will be: is there going to be a back to the future moment for some of you with VVOL's?
Also what VVOL questions, comments and concerns are in your future and on your mind?
And remember to check out the second part to this series here.
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Despite being declared dead, Fibre Channel continues to evolve with FC-BB-6
Despite being declared dead, Fibre Channel continues to evolve with FC-BB-6
Like many technologies that have been around for more than a decade or two, Fibre Channel (FC) for networking your servers and storage often gets declared dead when something new appears. It seems like just yesterday, when iSCSI was appearing on the storage networking scene in the early 2000s, that FC was declared dead, yet it remains and continues to evolve, including moving over Ethernet with FC over Ethernet (FCoE).
Recently the Fibre Channel Industry Association (FCIA) made an announcement on continued development and enhancements including FC-BB-6, which applies to both "classic" or "legacy" FC as well as the newer and emerging FCoE implementations. FCIA is not alone in this activity, as they are, as the name implies, the industry consortium that works with the T11 standards folks. T11 is a Technical Committee of the International Committee on Information Technology Standards (INCITS, pronounced "insights").
Keep in mind that there are a couple of pieces to Fibre Channel: the upper levels and the lower level transports.
With FCoE, the upper level portions get mapped natively on Ethernet without having to map on top of IP as happens with distance extension using FCIP.
Likewise FCoE is more than simply mapping one of the FC upper level protocols (ULPs) such as the SCSI command set (aka SCSI_FCP) onto IP (e.g. what iSCSI does). Think of ULPs almost as a guest that gets transported or carried across the network; however, let's also be careful not to play the software defined network (SDN), virtual network, network virtualization or I/O virtualization (IOV) card, at least not yet, we will leave that up to some creative marketers ;).
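One way to keep the layering straight (a simplified summary that glosses over plenty of detail) is to compare what carries the SCSI command set in each approach:

```python
# Simplified view of what carries block (SCSI) traffic in each approach.
# This is an orientation aid, not a complete protocol description.
transport_stacks = {
    "FC":    ["SCSI_FCP (ULP)", "Fibre Channel transport", "FC physical"],
    "FCoE":  ["SCSI_FCP (ULP)", "FC frames", "lossless (DCB) Ethernet"],
    "FCIP":  ["SCSI_FCP (ULP)", "FC frames", "TCP/IP", "Ethernet/WAN"],
    "iSCSI": ["SCSI command set", "iSCSI", "TCP/IP", "Ethernet"],
}

for name, stack in transport_stacks.items():
    print(f"{name:6s}: " + " over ".join(stack))
```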
At the heart of Fibre Channel, beyond the cabling and encoding scheme, is a set of protocols and command sets, one of which is FC Backbone, now in its 6th version (read more here at the T11 site, or here at the SNIA site).
Some of the highlights of the FCIA announcement include:
VN2VN connectivity support enabling direct point-to-point virtual links (not to be confused with point-to-point physical cabling) between nodes in an FCoE network, simplifying configurations for smaller SAN networks where zoning might not be needed (e.g. removing complexity and cost).
Support for Domain ID scalability including more efficient use by FCoE fabrics, enabling larger scalability of converged SANs. Also keep an eye on the emerging T11 FC-SW-6 distributed switch architecture for implementation over Ethernet, which is in the final stages of development.
Here are my perspectives on this announcement by the FCIA:
"Fibre Channel is a proven protocol for networked data center storage that just got better," said Greg Schulz, founder StorageIO. "The FC-BB-6 standard helps to unlock the full potential of the Fibre Channel protocol that can be implemented on traditional Fibre Channel as well as via Ethernet based networks. This means FC-BB-6 enabled Fibre Channel protocol based networks give flexibility, scalability and secure high-performance resilient storage networks to be implemented." |
Both "classic" or "legacy" Fibre Channel based cabling and networking are still alive with a road map that you can view here.
However FCoE also continues to mature and evolve, and in some ways FC-BB-6 and its associated technologies and capabilities can be seen as the bridge between the past and the future. Thus while the role of both FC and FCoE, along with other ways of networking your servers and storage, continues to evolve, so too does the technology. Also keep in mind that not everything is the same in the data center or information factory, which is why we have different types of server, storage and I/O networks to address different needs, requirements and preferences.
Additional reading and viewing on FC, FCoE and storage networking:
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
July 2014 Server and StorageIO Update newsletter
Server and StorageIO Update newsletter – July 2014
AWS adds Zocalo Enterprise File Sync Share and Collaboration
AWS adds Zocalo Enterprise File Sync Share and Collaboration
In case you missed it, today Amazon Web Services (AWS) announced Zocalo, an enterprise class storage and file sharing service. As you might have guessed, being file sync and share cloud storage, Zocalo can be seen as a competitor or alternative to other services including Box, Dropbox and Google among many others in the enterprise file sync and share (EFSS) space.
AWS Enterprise File Sync Share (EFSS) Zocalo overview and summary:
- Document collaboration (Comments and sharing) including available with AWS WorkSpaces
- Central common hub for sharing documents along with those owned by a user
- Select AWS regions where data is stored, along with setting up user policies and audit trails
- Sharing of various types of documents, worksheets, web pages, presentations, text and PDF among other files
- Support for Windows and other PCs, Macs, tablets and other mobile devices
- Cost effective (priced at $5 per user per month for 200GB of storage)
- Free 30 day trial for up to 50 users, each with 200GB (e.g. 10TB in aggregate; see the quick arithmetic after this list)
- Secure leveraging existing AWS regions and tools (encryption in transit and while at rest)
- Active Directory credentials integration
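As a quick sanity check on those numbers (pricing as listed above; the 500-user deployment below is a hypothetical example):

```python
# Quick arithmetic on the Zocalo figures above. Pricing is as listed
# ($5 per user per month for 200GB); the 500-user example is hypothetical.
PRICE_PER_USER_MONTH = 5      # USD
GB_PER_USER = 200

trial_users = 50
print(trial_users * GB_PER_USER, "GB in the free trial")   # 10000 GB = 10TB

users = 500                                        # hypothetical deployment
print(users * PRICE_PER_USER_MONTH, "USD/month for",
      users * GB_PER_USER / 1000, "TB of sync and share capacity")
```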
Learn more in the Zocalo FAQ found here
Register for the limited free Zocalo trial here
Additional Zocalo product details can be found here
AWS also announced, as part of its Mobile Services, Cognito, a mobile service for simple user identity and data synchronization, along with SNS, Mobile Analytics and other enhancements. Learn more about AWS Cognito here and Mobile Services here.
Check out other AWS updates, news and enhancements here
Ok, nuff said
Cheers
gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
June 2014 Server and StorageIO Update newsletter
Server and StorageIO Update newsletter – June 2014
April and May 2014 Server and StorageIO Update newsletter
Server and StorageIO Update newsletter – April and May 2014
Is there an information or data recession? Are you using less storage? (With Polls)
Is there an information or data recession? Are you using less storage? (With Polls)
Is there an information recession where you are creating, processing, moving or saving less data?
Are you using less data storage than in the past either locally online, offline or remote including via clouds?
IMHO there is no such thing as a data or information recession; granted, storage is being used more effectively by some, while economic pressures or competition mean your budgets have to be stretched further. Likewise people and data are living longer and getting larger.
In conversations with IT professionals, particularly the real customers (e.g. not vendors, VARs, analysts, blogalysts, consultants or media), I routinely hear from people that they continue to have the need to store more information, however their data storage usage and acquisition patterns are changing. For some this means using what they have more effectively, leveraging data footprint reduction (DFR) which includes archiving, compression, dedupe, thin provisioning and changing how and when data is protected. This also means using different types of storage from flash SSD to HDD to SSHD to tape summit resources as well as cloud in different ways, spanning block, file and object storage, local and remote.
A common question that comes up, particularly around vendor earnings announcement times, is whether the data storage industry is in decline given that some vendors are experiencing poor results.
Look beyond vendor revenue metrics
As background reading, you might want to check out this post here (IT and storage economics 101, supply and demand), which candidly should be common sense.
If all you looked at were a vendor's revenue or margin numbers as an indicator of how well an industry such as data storage (including traditional, legacy as well as cloud) is doing, you would not be getting the full picture.
What needs to be factored into the picture is how much storage is being shipped (from components such as drives to systems and appliances) as well as delivered by service providers.
Looking at storage systems vendors from a revenue earnings perspective, you would get mixed indicators depending on who you include, not to mention how those vendors report their breakout of revenues by product or the number of units shipped. For example, looking at public vendors EMC, HDS, HP, IBM, NetApp, Nimble and Oracle (among others) as well as the private ones (if you can see the data) such as Dell, Pure, Simplivity, Solidfire and Tintri results in different analysis. Some are doing better than others on revenues and margins, however try to get clarity on the number of units or systems shipped (for actual revenue vs. loaners (planting seeds for future revenue or trials) or demos).
Then look at the service providers such as AWS, CenturyLink, Google, HP, IBM, Microsoft, Rackspace or Verizon (among others) and you should see growth, however clarity about how much revenue plus margin they are actually generating for storage specifically vs. broad general buckets can be tricky.
Now look at the component suppliers such as Seagate and Western Digital (WD) for HDDs and SSHDs, who also provide flash SSD drives and other technology. Also look at the other flash component suppliers such as Avago/LSI (whose flash business is being bought by Seagate), FusionIO, SANdisk, Samsung, Micron and Intel among others (this does not include the systems vendors who OEM those or other products to build systems or appliances). These and other component suppliers can give another indicator as to the health of the industry both from a revenue and margin perspective, as well as footprint (e.g. how many devices are being shipped). For example, the legacy and startup storage systems and appliance vendors may have soft or lower revenue numbers, however are they shipping the same or less product? Likewise the cloud or service providers may be showing more revenues and more product being acquired, however at what margin?
What this all means?
Look at revenue numbers in the proper context as well as in the bigger picture.
If the same number of component devices (e.g. processors, HDD, SSD, SSHD, memory, etc) are being shipped or more, that is an indicator of continued or increased demand. Likewise if there is more competition and options for IT organizations there will be price competition between vendors as well as service providers.
All of this means that while IT organizations budgets stay stretched, their available dollars or euros should be able to buy (or rent) them more storage space capacity.
Likewise using various data and storage management techniques including DFR, the available space capacity can be stretched further.
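To put rough numbers on that (the prices and reduction ratios below are illustrative assumptions, not market data), a flat budget plus falling $/TB plus data footprint reduction can address several times the capacity:

```python
# Illustrative only: how a flat budget plus data footprint reduction (DFR)
# stretches effective capacity. Prices and ratios are assumptions.
budget = 100_000                # USD, unchanged year over year
price_per_tb_last_year = 500
price_per_tb_this_year = 400    # assumed price decline from competition

raw_last_year = budget / price_per_tb_last_year     # 200 TB
raw_this_year = budget / price_per_tb_this_year     # 250 TB

dfr_ratio = 2.5   # assumed combined effect of archive, compression, dedupe
effective_tb = raw_this_year * dfr_ratio
print(raw_last_year, raw_this_year, effective_tb)   # 200.0 250.0 625.0
```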
So this then begs the question: if the management of storage is important, why are we not hearing vendors talking about software defined storage management vs. chasing each other to out-software-define each other's storage?
Ah, that’s for a different post ;).
So what say you?
Are you using less storage?
Do you have less data being created?
Are you using storage and your available budget more effectively?
Please take a few minutes and cast your vote (and see the results).
Sorry, I have no Amex or Amazon gift cards or other things to offer you as a giveaway for participating, as nobody is secretly sponsoring this poll or post; it's simply sharing and conveying information for you and others to see and gain insight from.
Do you think that there is an information or data recession?
How about it: are you using or buying more storage, or could there be a data storage recession?
Some more reading links
IT and storage economics 101, supply and demand
Green IT deferral blamed on economic recession might be result of green gap
Industry trend: People plus data are aging and living longer
Is There a Data and I/O Activity Recession?
Supporting IT growth demand during economic uncertain times
The Human Face of Big Data, a Book Review
Garbage data in, garbage information out, big data or big garbage?
Little data, big data and very big data (VBD) or big BS?
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.
Instead the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way.
Introducing Solid State Hybrid Drives (SSHD)
Solid State Hybrid Drives (SSHD) are the successors to the previous generation Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here, and here).
While it would be nice to simply have SSD for everything, there are also economic budget realities to be dealt with. Keep in mind that a bit of nand flash SSD cache in the right location for a given purpose can go a long way which is the case with SSHDs. This is also why in many environments today there is a mix of SSD, HDD of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).
However, if you have the need for speed and can afford or benefit from the increased productivity, by all means go SSD!
On the other hand, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference with today's SSHDs, which are available for enterprise class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors that I have used for several years now, which had basic read acceleration but no write optimizations.
Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs
As their name implies, they are a hybrid between a nand flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best of both situation. The SSHD is based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), nand flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software defined storage performance optimization algorithms.
Btw, if you were paying attention to that last sentence you would have picked up on something about nonvolatile memory being used for persistent write cache, which should prompt the question: would that help with nand flash write endurance? Yup.
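A back-of-the-envelope way to see why a modest amount of cache in the right location goes a long way (the latencies and hit rates below are illustrative assumptions, not measured SSHD results):

```python
# Illustrative effective access time for a cached drive.
# Latencies and hit rates are assumptions, not measured SSHD numbers.
def effective_latency_ms(hit_rate, cache_ms, media_ms):
    return hit_rate * cache_ms + (1.0 - hit_rate) * media_ms

hdd_ms = 8.0      # assumed average HDD access time
cache_ms = 0.2    # assumed nand flash cache access time

for hit_rate in (0.0, 0.5, 0.8, 0.95):
    print(f"hit rate {hit_rate:.0%}: "
          f"{effective_latency_ms(hit_rate, cache_ms, hdd_ms):.2f} ms")
```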
Where and when to use SSHD?
In the StorageIO Industry Trends Perspective thought leadership white paper I recently released, compliments of Seagate Enterprise Turbo SSHD (that's a disclosure btw ;), enterprise class Solid State Hybrid Drives (SSHD) were looked at and test driven in the StorageIO Labs with various application workloads. These activities included running common applications in a virtual environment, including database and email messaging, using industry standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).
Conventional storage system focused workloads using iometer, iorate and vdbench were also run in the StorageIO Labs to set up baseline reads, writes, random, sequential, small and large I/O sizes with IOPS, bandwidth and response time latency results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?) with other workloads continuing to run in different configurations. The various test drive proof points were done comparing SSHD, SSD and different HDDs.
Data Protection (Archiving, Backup, BC, DR) | Staging cache buffer area for snapshots, replication or current copies before streaming to other storage tier using fast read/write capabilities. Meta data, index and catalogs benefit from fast reads and writes for faster protection. |
Big Data DSS | Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHD vs. slower similar capacity size HDDs for Data warehouse, DSS and other analytic environments. |
Email, Text and Voice Messaging | Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os with more space capacity. |
OLTP, Database | Eliminate the need to short stroke HDDs to gain performance, offer more space capacity and IOP performance per device for tables, logs, journals, import/export and scratch, temporary ephemeral storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-thru caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices. |
Server Virtualization | Fast disk storage for data stores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others. Holding virtual machines such as VMware VMDKs, along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache with writes going to fast SSHD. For example VMware V5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data while boosting performance without breaking the budget or adding complexity. Speaking of virtual, as mentioned the various proof points were run using Windows systems that were VMware guests, with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached; read how to do that here. Hint: If you know about the VMware trick for making a HDD look like a SSD to vSphere/ESXi (refer to here and here), think outside the virtual box for a moment on some things you could do with SSHD in a VSAN environment among other things, for now, just sayin ;). |
Virtual Desktop Infrastructure (VDI) | SSHD can be used as high performance magnetic disk for storing linked clone images, applications and data. Leverage fast reads to support read ahead or pre-fetch to complement SSD based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-thru cache solutions to be more effective. Reduce the impact of boot, shutdown, and virus scan or maintenance storms while providing more space capacity. |
Table 1 Example application and workload scenarios benefiting from SSHDs
Test drive application proof points
Various workloads were run using Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real world like application workload scenarios. These include general storage I/O performance characteristics profiling (e.g. reads, writes, random, sequential or various IOP size) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Council (TPC); along with VMware server virtualization among others use case scenarios.
Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These were also conducted using 6Gbps SAS with PCIe Gen 2 based servers, and ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.
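For readers who want to sanity check numbers like these on their own, the basic relationships between IOPS, I/O size, bandwidth and outstanding I/Os (Little's law) are straightforward; the figures below are generic examples, not the proof point results from the white paper.

```python
# Basic storage I/O math: bandwidth = IOPS x I/O size, and (Little's law)
# outstanding I/Os = IOPS x latency. Example numbers are generic, not the
# proof point results discussed above.
def bandwidth_mbps(iops, io_size_kb):
    return iops * io_size_kb / 1024.0

def outstanding_ios(iops, latency_ms):
    return iops * (latency_ms / 1000.0)

print(bandwidth_mbps(iops=4000, io_size_kb=8))       # ~31.25 MB/s of 8KB I/Os
print(outstanding_ios(iops=4000, latency_ms=2.0))    # 8 I/Os in flight
```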
Copy (read and write) 80GB and 220GB file copies (time to copy entire file)
SQLserver TPC-B batch database updates
Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 500GB 3.5” 7.2K RPM HDD 3 Gbps SATA, 1TB 3.5” 7.2K RPM HDD 3 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, dual CPU (Intel X3490 2.93 GHz), with LSI 9211 6Gbps SAS adapters and TPC-B (www.tpc.org) workloads. The VM resided on a separate data store from the devices being tested. All devices being tested with the SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.
SQLserver TPC-E transactional workload
Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 300GB 2.5” Savio 10K RPM HDD 6 Gbps SAS, 1TB 3.5” 7.2K RPM HDD 6 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, dual CPU (E8400 2.99GHz), with LSI 9211 6Gbps SAS adapters and TPC-E (www.tpc.org) workloads. The VM resided on a separate SSD based data store from the devices being tested (e.g., where the MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.
Microsoft Exchange workload
Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021 ) SSD 6 Gbps SATA, 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 2.5” Savio 146GB HDD 6 Gbps SAS, 3.5” Barracuda 500GB 7.2K RPM HDD 3 Gbps SATA. Email server hosted as guest on VMware vSphere/ESXi V5.5, Microsoft Small Business Server (SBS) 2011 Service Pack 1 64 bit, 8GB DRAM, One CPU (Intel X3490 2.93 GHz) LSI 9211 6 Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where EDB resided. VM on a SSD based separate data store than devices being tested. Log file IOPs were handled via a separate SSD device.
Read more about the above proof points along view data points and configuration information in the associated white paper found here (no registration required).
What this all means
Similar to flash-based SSD technologies, the question is not if, rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environment has the need for storage I/O speed without giving up space capacity or breaking your budget, SSD enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise class SSHDs such as those from Seagate by visiting this link here.
Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.
Ok, nuff said.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Data Storage Innovation Chat with SNIA Wayne Adams and David
Data Storage Innovation Chat with SNIA Wayne Adams and David
In this episode, SNIA Chairman Emeritus Wayne Adams and current Chairman David Dale join me in a conversation from the Data Storage Innovation Conference (DSICON) 2014 conference event.
SNIA Chairman David Dale opening remarks SNIA DSICON 2014
SNIA DSI Conference (DSICON), CDMI Conformance Testing and other updates
DSICON is a new event produced by SNIA targeted at IT professionals involved with data storage related topics, themes, technologies and tools spanning hardware, software, cloud, virtual and physical. In this conversation, we talk about the new DSI event, the diversity of new attendees who are attending their first SNIA event, along with other updates. Some of these updates include what is new with the SNIA Cloud Data Management Interface (CDMI), Non Volatile Memory (think flash and SSD), SMI-S, education and more. In addition to the DSICON event, SNIA also announced that the CDMI Cloud Interoperability Conformance Test Program is now available for cloud solution vendors and providers.
DSI, Santa Clara, CA (April 22, 2014) — The Storage Networking Industry Association (SNIA) today announced the launch of a Cloud Data Management Interface (CDMI) Conformance Test Program (CTP) that validates cloud products' conformance to the ISO/IEC CDMI standard for cloud data interoperability (ISO catalog number ISO/IEC 17826:2012). Cloud solutions that pass the CDMI CTP offer cloud consumers assurance that the CDMI standard has been properly implemented and that data stored in any conformant implementation will be transportable to any other conformant implementation.
Here is a perspective commentary quote that I issued which was included in the SNIA Press Release.
“Today, the cloud market is crowded with a slew of vendors offering different solutions for migration, data management and security, often leaving IT customers confused about the right solution for their requirements,” said Greg Schulz, founder of StorageIO, a storage technology advisory and consulting firm. “SNIA’s CDMI Conformance Test Program is a great step forward helping IT customers, VARs or others in the industry navigate their way through the fog of cloud interoperability requirements in a streamlined fashion, not to mention laying standard routes vendors will want to adopt going forward.”
Check out the full SNIA CDMI press release announcement for the conformance testing here, as well as learn more about CDMI here.
Listen in to our podcast conversation here as we cover cloud, convergence, software defined and more about data storage.
Topics and themes discussed:
Check out SNIA and DSICON listen in to the conversation with David Dale and Wayne Adams here.
Ok, nuff said.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Chat with Cash Coleman talking ClearDB, cloud database and Johnny Cash
Podcast with Cash Coleman talking ClearDB, cloud database and Johnny Cash
In this episode from the SNIA DSI 2014 event I am joined by Cashton Coleman (@Cash_Coleman).
Introducing Cashton (Cash) Coleman and ClearDB
Cashton (Cash) is a software architect, product mason, family bonder, life builder and idea founder, along with Founder & CEO of SuccessBricks, Inc., makers of ClearDB. ClearDB is a provider of MySQL database software tools for cloud and physical environments. In our conversation we talk about ClearDB, what they do and whom they do it with, including deployments in clouds as well as onsite. For example, if you are using some of the Microsoft Azure cloud services with MySQL, you may already be using this technology. However, there is more to the story and discussion, including how Cash got his name and how to speed up databases for little and big data, among other topics.
If you are a database person, you will want to listen to what Cash has to say about boosting performance and getting more value out of your physical hardware or cloud services. On the other hand, if you are a storage person, listen in to get some insight and ideas on how to address database performance and resiliency. For others who just like to listen to new trends, technology talk, or hear about emerging companies to keep an eye on, you won't want to miss the podcast conversation.
Topics and themes discussed:
Check out ClearDB and listen in to the conversation with Cash podcast here.
Ok, nuff said.
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Lenovo TS140 Server and Storage I/O Review
Lenovo TS140 Server and Storage I/O Review
This is a review that looks at my recent hands-on experiences in using a TS140 (Model MT-M 70A4 – 001RUS) pedestal (aka tower) server that the Lenovo folks sent to me to use for a month or so. The TS140 is one of the servers that Lenovo had prior to its acquisition of IBM's x86 server business, which you can read about here.
The Lenovo TS140 Experience
Let's start with the overall experience, which was very easy and good. This includes going from initially answering some questions to get the process moving, to agreeing to keep the equipment safe, secure and insured as well as not damaging anything (this was not a tear-down, rip-it-apart-into-pieces trial).
Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived. Turns out it was a box (server hardware) inside of a Lenovo box, that was inside a slightly larger unmarked shipping box (see larger box in the background).
TS140 shipment undergoing initial security screen scan and sniff (all was ok)
TS140 with Keyboard and Mouse (Monitor not included)
One of the reasons I have a photo of the TS140 on a desk is that I initially put it in an office environment, as Lenovo claimed it would be quiet enough to do so. I was not surprised, and indeed the TS140 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet, the TS140 is a good fit for environments that need a small or starter server that has to go into an office environment as opposed to a server or networking room. For those who are into mounting servers, there is the option of placing the TS140 on its side in a cabinet or rack.
TS140 with Windows Server 2012 Essentials
TS140 as tested
TS140 "Selfie" with 4 x 4GB DDR3 DIMM (16GB) and PCIe slots (empty)
16GB RAM (4 x 4GB DDR3 UDIMM, larger DIMMs are supported)
Windows Server 2012 Essentials
Intel Xeon E3-1225 v3 @3.2 Ghz quad (C226 chipset and TPM 1.2) vPRO/VT/EP capable
Intel GbE 1217-LM Network connection
280 watt power supply
Keyboard and mouse (no monitor)
Two 7.2K SATA HDDs (WD) configured as RAID 1 (100GB LUN)
Slot 1 PCIe G3 x16
Slot 2 PCIe G2 x1
Slot 3 PCIe G2 x16 (x4 electrical signal)
Slot 4 PCI (legacy)
Onboard 6Gbps SATA RAID 0/1/10/5 (see the quick capacity math after this list)
Onboard SATA 3.0 (6Gbps) connectors (0-4), USB 3.0 and USB 2.0
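Since the onboard controller supports RAID 0/1/10/5, here is the quick usable-capacity math for the kinds of small drive sets a TS140 might hold; the drive sizes and the four-drive examples are hypothetical (the review unit shipped with the two-drive RAID 1 noted above).

```python
# Usable capacity per RAID level for small drive sets. Drive sizes and the
# four-drive examples are hypothetical, not the review configuration.
def usable_tb(level, drives, size_tb):
    if level == "RAID0":  return drives * size_tb          # stripe, no protection
    if level == "RAID1":  return size_tb                    # two-drive mirror
    if level == "RAID10": return drives * size_tb / 2       # mirrored stripes
    if level == "RAID5":  return (drives - 1) * size_tb     # one drive of parity
    raise ValueError(level)

print(usable_tb("RAID1", 2, 1.0))    # 1.0 TB usable from 2 x 1TB
print(usable_tb("RAID10", 4, 1.0))   # 2.0 TB usable from 4 x 1TB
print(usable_tb("RAID5", 4, 1.0))    # 3.0 TB usable from 4 x 1TB
```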
Read more about what I did with the Lenovo TS140 in part II of my review along with what I liked, did not like and general comments here.
Ok, nuff said (for now)
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved