Some Windows Server Storage I/O related commands

Storage I/O trends

The following are some commands and tools for Microsoft Windows environments that are useful for storage I/O activities (among others).

Microsoft Windows

Finding a Windows physical disk, SSD or storage system device name

You likely know the more familiar Windows storage device names such as A:, B:, C: and D: for solid state devices (SSD), hard disk drives (HDD) and other media, which you can view from Windows Explorer, Computer or the admin tools.

Windows storage devices

However, what if you need to find out the physical device name for raw (not mounted) as well as mounted devices for configuration purposes? For example, a tool might want the physical name for your C: drive, which could be \\.\PhysicalDrive0.

No worries, use the command WMIC DISKDRIVE LIST BRIEF

Windows physical device name

Need more detail about the devices beyond what is shown above?

Then use WMIC DISKDRIVE LIST or, as in the above example, direct the output to a file, with the results shown below (scroll to the left or right to see more detailed information).

        Availability  BytesPerSector  Capabilities  CapabilityDescriptions                 CompressionMethod  ConfigManagerErrorCode  ConfigManagerUserConfig  DefaultBlockSize  Description  DeviceID            ErrorCleared  ErrorDescription  ErrorMethodology  Index  InstallDate  InterfaceType  LastErrorCode  Manufacturer            MaxBlockSize  MaxMediaSize  MediaLoaded  MediaType              MinBlockSize  Model                                  Name                NeedsCleaning  NumberOfMediaSupported  Partitions  PNPDeviceID                                                  PowerManagementCapabilities  PowerManagementSupported  SCSIBus  SCSILogicalUnit  SCSIPort  SCSITargetId  SectorsPerTrack  Signature   Size           Status  StatusInfo  SystemName  TotalCylinders  TotalHeads  TotalSectors  TotalTracks  TracksPerCylinder  
              512             {3, 4}        {"Random Access", "Supports Writing"}                     0                       FALSE                                      Disk drive   \\.\PHYSICALDRIVE2                                                    2                   SCSI                          (Standard disk drives)                              TRUE         Fixed hard disk media                ATA ST3000DM001-1CH1 SCSI Disk Device  \\.\PHYSICALDRIVE2                                         0           SCSI\DISK&VEN_ATA&PROD_ST3000DM001-1CH1\5&3626375C&0&000600                                                         0        0                3         6             63               0           3000590369280  OK                  DBIOTEST    364801          255         5860528065    93024255     255                
              512             {3, 4}        {"Random Access", "Supports Writing"}                     0                       FALSE                                      Disk drive   \\.\PHYSICALDRIVE3                                                    3                   SCSI                          (Standard disk drives)                              TRUE         Fixed hard disk media                SEAGATE ST600MP0034 SCSI Disk Device   \\.\PHYSICALDRIVE3                                         0           SCSI\DISK&VEN_SEAGATE&PROD_ST600MP0034\5&3626375C&0&000A00                                                          0        0                3         10            63                           600124654080   OK                  DBIOTEST    72961           255         1172118465    18605055     255                
              512             {3, 4}        {"Random Access", "Supports Writing"}                     0                       FALSE                                      Disk drive   \\.\PHYSICALDRIVE4                                                    4                   SCSI                          (Standard disk drives)                              TRUE         Fixed hard disk media                SEAGATE ST600MX0004 SCSI Disk Device   \\.\PHYSICALDRIVE4                                         0           SCSI\DISK&VEN_SEAGATE&PROD_ST600MX0004\5&3626375C&0&000C00                                                          0        0                3         12            63                           600124654080   OK                  DBIOTEST    72961           255         1172118465    18605055     255                
              512             {3, 4}        {"Random Access", "Supports Writing"}                     0                       FALSE                                      Disk drive   \\.\PHYSICALDRIVE1                                                    1                   SCSI                          (Standard disk drives)                              TRUE         Fixed hard disk media                SEAGATE ST9300603SS SCSI Disk Device   \\.\PHYSICALDRIVE1                                         0           SCSI\DISK&VEN_SEAGATE&PROD_ST9300603SS\5&3626375C&0&000400                                                          0        0                3         4             63                           299992412160   OK                  DBIOTEST    36472           255         585922680     9300360      255                
              512             {3, 4}        {"Random Access", "Supports Writing"}                     0                       FALSE                                      Disk drive   \\.\PHYSICALDRIVE0                                                    0                   SCSI                          (Standard disk drives)                              TRUE         Fixed hard disk media                VMware Virtual disk SCSI Disk Device   \\.\PHYSICALDRIVE0                                         2           SCSI\DISK&VEN_VMWARE&PROD_VIRTUAL_DISK\5&1982005&1&000000                                                           0        0                2         0             63               -873641784  64420392960    OK                  DBIOTEST    7832            255         125821080     1997160      255    
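To recap the commands used above in one place, here is a sketch (the output file name and path are hypothetical examples):

```batch
rem Brief listing: device IDs, models, partitions and sizes
wmic diskdrive list brief

rem Full detail, redirected to a text file for easier reading
wmic diskdrive list > C:\temp\diskdrive_detail.txt
```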

Remembering (or learning) Xcopy

Some of you might be familiar with Xcopy and if not, it is a handy tool for copying files, folders and directories to local as well as networked storage. Some handy Xcopy command switches include:

/J = use unbuffered I/O (helpful for large files)
/Y = suppress prompting to confirm overwriting files
/C = continue copying even if errors occur
/E = copy directories and sub-directories, including empty ones
/H = copy hidden and system files
/Q = quiet mode (don’t list files being copied)

In the following example, the contents of the Videos folder and its sub-directories (83.5GB) are copied to another destination. Note the Time /T command that is also shown, which is useful for timing how long the copy takes (e.g. subtract the start time from the end time for the elapsed time). In this example 83.5GB were copied from one place to another on the same SSD device and, using the results of the Time /T command, the elapsed time was about 12 minutes.

Xcopy command example
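As a sketch of the sequence described above (the source and destination paths are hypothetical examples), the copy bracketed by Time /T might look like:

```batch
time /t
xcopy C:\Users\Demo\Videos D:\Copy\Videos\ /J /E /H /Y /C
time /t
```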

Diskpart, don’t be scared, however be careful

Ever have a Windows storage device or system that failed to boot, or a problem with a partition, volume or other issue?

How about running into a situation where you are not able to format a device that you know and can confirm is ok to erase, yet you get a message that the volume is write protected or read only?

Diskpart is a handy, powerful and potentially dangerous tool if you are not careful, as you could mistakenly drop a good volume or partition (e.g. the importance of having good backups). However, Diskpart can be used to help repair storage devices that have boot problems, or for clearing read-only attributes among other tasks. If you prefer GUI interfaces, many of the Diskpart functions can also be done via the Disk Management interface (e.g. Control Panel -> All Control Panel Items -> Administrative Tools -> Computer Management -> Storage -> Disk Management). Note that Diskpart needs to be run as Administrator for certain functions.

windows diskpart

In the above example the LIST DISK command shows what disks are present (on-line or off-line) which means that you may see devices here that do not show up elsewhere. Also shown is selecting a disk and then listing partitions, selecting a partition and showing attributes. The Attribute command can be used for clearing Read Only modes should a partition become write protected.

Hint: ever have a device that once had VMware installed on it, then you move it to Windows and try to reformat it for use, only to get a Read Only error? If so, you will want to have a look at Diskpart and the Attributes commands. However, BE CAREFUL and pay attention to which disk, partition and volume you are working with, as you can easily cause a problem that would result in testing how good your backups are.
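A hedged sketch of an interactive Diskpart session along the lines described above (the disk and partition numbers are hypothetical; double-check your selections before clearing attributes):

```text
diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> list partition
DISKPART> select partition 1
DISKPART> attributes disk
DISKPART> attributes disk clear readonly
DISKPART> exit
```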

Is SATA SSD TRIM Enabled?

If you have a SATA SSD, the TRIM command is a form of garbage collection hint that is supported starting with Windows 7 (SAS drives use the SCSI UNMAP command). Not sure if your system has TRIM enabled? Try the following command as administrator. Note that a result of "0" means TRIM is enabled, while a value of "1" means that it is disabled for your system.

Windows SSD TRIM
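The check referred to above uses fsutil; from an elevated (administrator) command prompt:

```batch
rem DisableDeleteNotify = 0 means TRIM (delete notifications) is enabled, 1 means disabled
fsutil behavior query DisableDeleteNotify

rem To enable TRIM if it is disabled:
fsutil behavior set DisableDeleteNotify 0
```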

Want to learn more about TRIM, check out this piece from Intel as well as this Microsoft Windows item.

Having issues with collecting CPU and performance statistics?

Having an issue collecting your system statistics, or getting an "Unable to obtain CPU statistics" error when running a benchmark or workload generation tool such as vdbench?

Try the Lodctr /R command (as administrator), however read this Microsoft Tip first to learn more.

Windows Lodctr /R

Sdelete and drive erase

As its name implies, Sdelete (if you do not have this tool, you can download it here from Microsoft) can not only delete files and folders, but also write "0" patterns across a disk to securely erase it. You can specify the number of times to run the write-"0" passes across a disk to meet your erasure requirements.

There is also another use for Sdelete: if you need or want to pre-condition an SSD or other device, such as for testing, you can run a pre-conditioning pass using Sdelete.

Some command options include -p n, where "n" is the number of passes to run, -s to recursively process sub-directories, -z to write "0" (zero out) the free space on the device, -c to clean free space, and -a to process files with read-only attributes. Learn more and get your copy of Sdelete from Microsoft here.
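As a hedged example of a pre-conditioning or erasure pass (the drive letter and pass count are hypothetical; be certain the target device is safe to overwrite):

```batch
rem Three passes of zeros across the free space on drive D:
sdelete -p 3 -z D:
```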

Rufus, SeaTools, Samsung SSD Magician and Cyberduck

A handy tool available from Seagate (it may only work with Seagate and their partner devices) is SeaTools, which can give drive information, health and status as well as perform various tests including SMART.

Seagate Seatools
Seagate Seatools example

Different HDD and SSD as well as storage system vendors provide tools for configuration, monitoring, management and in some cases data movement with their solutions. Samsung SSD Magician is a tool I have installed for managing my SSDs (830 and 840 Pros) that has features for updating firmware and checking drive health as well as performance optimization. Other handy tools include the Samsung copy tool based on Clonix, as well as Acronis, among other clone or data migration utilities (more on those in a future post).

Samsung SSD Magician
Samsung SSD Magician

While the Microsoft Windows USB Tool is handy for dealing with Microsoft ISOs, for creating bootable USB devices from other ISOs, such as for installing VMware or Linux on bare-metal systems, Rufus is a handy tool to have in the toolbox.

Rufus ISO to USB tool

Another useful tool that functions as an SSH and FTP utility is Cyberduck, which also supports access to Amazon S3 among other cloud services.

There are many other tools for server, storage I/O and other activities on Windows, not to mention other platforms; however, hopefully you find the above useful.

How about it, what are your favorite Windows server and storage I/O tools and commands?

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Small Medium Business (SMB) IT continues to gain respect, what about SOHO?

Note that in Information Technology (IT) conversations there are multiple meanings for SMB, including Server Message Block, aka Microsoft Windows CIFS (Common Internet File System), along with its SAMBA implementation; however, for this piece the context is Small Medium Business.

A decade or so ago, mention SMB (Small Medium Business) to many vendors, particularly those who were either established in or focused on the big-game enterprise space, and you might have gotten a condescending look or answer, if not worse.

In other words, a decade ago the SMB did not get much respect from some vendors and those who followed or covered them.

Fast forward to today and many of those same vendors, along with their pundits and media followers, have now gotten their SMB groove, lingo, swagger or social media footsteps; granted, for some that might be at the higher end of SMB, also known as SME (Small Medium Enterprise).

Today in general the SMB is finally getting respect, and in some circles it is downright cool and trendy vs. being perceived as old school, stodgy large enterprise. Likewise, the Remote Office Branch Office (ROBO) segment gained more awareness and coverage a few years back; while the ROBO buzz has subsided, the market and opportunities are certainly still there.

What about Small Office Home Office (SOHO) today?

I assert that the SOHO environment and market today is being treated with a similar lack of respect that the larger SMB received a decade ago.

Granted, there are some vendors and their followers who see the value, opportunity and market-size potential of expanding their portfolios and routes to market to meet the different needs of the SOHO.

relative enterprise sme smb soho positioning

What is the SOHO market or environment?

One of the challenges with SMB, SOHO and other classifications is just that: the classifications.

Some classifications are based on the number of employees, others on the number of servers or workstations, while others are based on revenue or even physical location.

Meanwhile some are based on types of products, technologies or tools while others are tied to IT or general technology spending.

Some confuse the SOHO space with the consumer market space or sector, which should not be a surprise if you view market segments as enterprise, SMB and consumer. However, if you take a more pragmatic approach, between the true consumer and SMB spaces lies the SOHO space.

For some, the definitions of what is consumer, SOHO, SMB, SME and enterprise (among others) will be based on number of employees or revenue amount. Yet for others the categories may be tied to IT spending (e.g. price bands), number of workstations, servers, storage space capacity or some other metric. On the other hand, some definitions of consumer vs. SOHO vs. SMB vs. SME or enterprise will be based on product capabilities, size, feature function and cost among other attributes.

Understanding the SOHO

Keep in mind that SOHO can also overlap with Remote Office Branch Office (ROBO), not to mention blend with high-end consumer (prosumer) or lower bounds of SMB.

Part of the challenge (or problem) is that many confuse the Home Office or HO aspect of SOHO as being consumer.

Likewise many also confuse the Small Office or SO part of SOHO as being just the small home office or the virtual office of a mobile worker.

The reality is that just as the SMB space has expanded, there is also a growing area just above where consumer markets exist and where many place the lower-end of SMB (e.g. the bottom limits of where the solutions fit).

The reality is that while the HO gets included as part of SOHO, there is also the SO or Small Office, which is actually the low-end of the SMB space.

Keep in mind that there are more:
SOHO than SMB
SMB than SME
SME than enterprise
F500 (Fortune 500) than F100
F100 than F10 and so forth.

Here is my point

SOHO does not have to be the Rodney Dangerfield of IT (e.g. gets no respect)!

If you jumped on the SMB bandwagon a decade ago, start paying attention to what’s going on with the SOHO or lower-end SMB sector. The reasons are simple: just as SMBs can grow up to be larger SMBs, SMEs or enterprises, SOHOs can also evolve to become SMBs, either in business size or in IT and data infrastructure needs and requirements.

For those who prefer (at least for now) to look down upon or ignore the SOHO, similar to what was done with SMB before converting to SMBism, do so at your own risk.

However, let me be clear: this does not mean ignoring or shifting focus and thus disrupting or losing coverage of other areas; rather, extend, expand and at least become aware of what is going on in the SOHO space.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

November 2013 Server and StorageIO Update Newsletter & AWS re:Invent info


Welcome to the November 2013 edition of the StorageIO Update (newsletter) containing trends and perspectives on cloud, virtualization and data infrastructure topics. Fall (here in North America) has been busy with in-person, on-line live and virtual events, along with various client projects, research, and time in the StorageIO cloud, virtual and physical labs test driving, validating and doing proof-of-concept research among other tasks. Check out the industry trends perspectives articles, comments and blog posts below that cover some activity over the past month.

Last week I had the chance to attend the second annual AWS re:Invent event in Las Vegas, see my comments, perspectives along with a summary of announcements from that conference below.

Watch for future posts, commentary, perspectives and other information down the road (and in the not so distant future) pertaining to information and data infrastructure topics, themes and trends across cloud, virtual, legacy server, storage, networking, hardware and software. Also check out our backup, restore, BC, DR and archiving page (under the Resources section on StorageIO.com) for various presentations, book chapter downloads and other content.

Enjoy this edition of the StorageIO Update newsletter.

Ok, nuff said (for now)

Cheers gs

StorageIO Industry Trends and Perspectives

Industry trends: Amazon Web Services (AWS) re:Invent

Last week I attended the AWS re:Invent event in Las Vegas. This was the second annual AWS re:Invent conference which while having an AWS and cloud theme, it is also what I would describe as a data infrastructure event.

As a data infrastructure event AWS re:Invent spans traditional legacy IT and applications to newly invented, re-written, re-hosted or re-platformed ones from existing and new organizations. By this I mean a mix of traditional IT or enterprise people as well as cloud and virtual geek types (said with affection and all due respect of course) across server (operating system, software and tools), storage (primary, secondary, archive and tools), networking, security, development tools, applications and architecture.

That also means management from application and data protection spanning High Availability (HA), Business Continuance (BC), Disaster Recovery (DR), backup/restore, archiving, security, performance and capacity planning, service management among other related themes across public, private, hybrid and community cloud environments or paradigms. Hmm, I think I know of a book that covers the above and other related topic themes, trends, technologies and best practices called Cloud and Virtual Data Storage Networking (CRC Press) available via Amazon.com in print and Kindle (among other) versions.

During the event AWS announced enhanced and new services including:

  • WorkSpaces (Virtual Desktop Infrastructure – VDI) announced as a new service for cloud based desktops across various client devices including laptops, Kindle Fire, iPad and Android tablets using PCoIP.
  • Kinesis which is a managed service for real-time processing of streaming (e.g. Big) data at scale including ability to collect and process hundreds of GBytes of data per second across hundreds of thousands of data sources. On top of Kinesis you can build your big data applications or conduct analysis to give real-time key performance indicator dashboards, exception and alarm or event notification and other informed decision-making activity.
  • EC2 C3 instances provide Intel Xeon E5 processors and Solid State Device (SSD) based direct attached storage (DAS) like functionality vs. EBS provisioned IOPs for cost-effective storage I/O performance and compute capabilities.
  • Another EC2 enhancement is the G2 instance, which leverages a high performance NVIDIA GRID GPU with 1,536 parallel processing cores. This new instance is well suited for 3D graphics, rendering, streaming video and other related applications that need large-scale parallel or high performance compute (HPC) also known as high productivity compute.
  • Redshift (cloud data warehouse) now supports cross region snapshots for HA, BC and DR purposes.
  • CloudTrail records AWS API calls made via the management console for analytics and logging of API activity.
  • Beta of Trusted Advisor dashboard with cost optimization saving estimates including EBS and provisioned IOPs
  • Relational Database Service (RDS) support for PostgreSQL including multi-AZ deployment.
  • Ability to discover and launch various software from AWS Marketplace via the EC2 Console. The AWS Marketplace for those not familiar with it is a catalog of various software or application titles (over 800 products across 24 categories) including free and commercial licensed solutions that include SAP, Citrix, Lotus Notes/Domino among many others.
  • AppStream is a low-latency (STX protocol based) service for streaming resource (e.g. compute, storage or memory) intensive applications and games from the AWS cloud to various clients, desktops or mobile devices. This means that the resource-intensive functionality can be shifted to the cloud, while providing a low-latency (e.g. fast) user experience, off-loading the client from having to support increased compute, memory or storage capabilities. Key to AppStream is the ability to stream data in a low-latency manner, including over networks normally not suited for high quality or bandwidth-intensive applications. IMHO AppStream, while focused initially on mobile apps and gaming, being a bit-streaming technology has the potential to be used for other similar functions that can leverage download speed improvements.
  • When I asked an AWS person if or what role AppStream might have or related to WorkSpaces their only response was a large smile and no comment. Does this mean WorkSpaces leverages AppStream? Candidly I don’t know, however if you look deeper into AppStream and expand your horizons, see what you can think up in terms of innovation. Updated 11/21/13 AWS has provided clarification that WorkSpaces is based on PCoIP while AppStream uses the STX protocols.

    Check out AWS Sr. VP Andy Jassy keynote presentation here.

Overall I found the AWS re:Invent event to be a good conference spanning many aspects and areas of focus which means I will be putting it on my must attend list for 2014.

Industry trends, tips, commentary, articles and blog posts
What is being seen, heard and talked about while out and about

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.

Storage I/O posts

Recent industry trends, perspectives and commentary by StorageIO Greg Schulz in various venues:

NetworkComputing: Comments on Software-Defined Storage Startups Win Funding

Digistor: Comments on SSD and flash storage
InfoStor: Comments on data backup and virtualization software

ITbusinessEdge: Comments on flash SSD and hybrid storage environments

NetworkComputing: Comments on Hybrid Storage Startup Nimble Storage Files For IPO

InfoStor: Comments on EMC’s Light to Speed: Flash, VNX, and Software-Defined

InfoStor: Data Backup Virtualization Software: Four Solutions

ODSI: Q&A With Greg Schulz – A Quick Roundup of Data Storage Industry

Recent StorageIO Tips and Articles in various venues:

FedTechMagazine: 3 Tips for Maximizing Tiered Hypervisors
InfoStor:
RAID Remains Relevant, Really!

Recent StorageIO blog post:

EMC announces XtremIO General Availability (Part I) – Announcement analysis of the all flash SSD storage system
Part II: EMC announces XtremIO General Availability, speeds and feeds – Part two of two part series with analysis
What does gaining industry traction or adoption mean too you? – There is a difference between buzz and deployment
Fall 2013 (September and October) StorageIO Update Newsletter – In case you missed the fall edition, here it is

Check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends.

Server and StorageIO seminars, conferences, webcasts, events and activities (out and about)

Seminars, symposium, conferences, webinars
Live in person and recorded recent and upcoming events

While 2013 is winding down, the StorageIO calendar continues to evolve, here are some recent and upcoming activities.

December 11, 2013: Backup.U Data Protection for Cloud 201 (Google+ hangout)
December 3, 2013: Backup.U Data Protection for Cloud 101 (Online webinar)
November 19, 2013: Backup.U Data Protection for Virtualization 201 (Google+ hangout)
November 12-13, 2013: AWS re:Invent event (Las Vegas, NV)
November 5, 2013: Backup.U Data Protection for Virtualization 101 (Online webinar)
October 22, 2013: Backup.U Data Protection for Applications 201 (Google+ hangout)

Click here to view other upcoming along with earlier event activities. Watch for more 2013 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.

If you missed the Fall (September and October) 2013 StorageIO Update newsletter, click here to view that and other previous editions as HTML or PDF versions. Click here to subscribe to this newsletter (and pass it along). View archives of past StorageIO Update newsletters as well as download PDF versions at: www.storageio.com/newsletter

Ok, nuff said (for now).
Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved    

Part II: EMC announces XtremIO General Availability, speeds and feeds

XtremIO flash SSD more than storage I/O speed

Following up part I of this two-part series, here are more details, insights and perspectives about EMC XtremIO and its general availability announced today.

XtremIO the basics

  • All flash Solid State Device (SSD) based solution
  • Cluster of up to four X-Brick nodes today
  • X-Bricks available in 10TB increments today, 20TB in January 2014
  • 25 eMLC SSD drives per X-Brick with redundant dual processor controllers
  • Provides server-side iSCSI and Fibre Channel block attachment
  • Integrated data footprint reduction (DFR) including global dedupe and thin provisioning
  • Designed for extending duty cycle, minimizing wear of SSD
  • Removes need for dedicated hot spare drives
  • Capable of sustained performance and availability with multiple drive failure
  • Only unique data blocks are saved, others tracked via in-memory meta data pointers
  • Reduces overhead of data protection vs. traditional small RAID 5 or RAID 6 configurations
  • Eliminates overhead of back-end functions performance impact on applications
  • Deterministic storage I/O performance (IOPs, latency, bandwidth) over the life of the system

When would you use XtremIO vs. another storage system?

If you need all enterprise-like data services including thin provisioning, dedupe, and resiliency with deterministic performance on an all-flash system with raw capacity from 10-40TB (today), then XtremIO could be a good fit. On the other hand, if you need a mix of SSD based storage I/O performance (IOPS, latency or bandwidth) along with some HDD based space capacity, then a hybrid or traditional storage system could be the solution. Then there are scenarios where a hybrid storage system, array or appliance (mix of SSD and HDD) is used for most of the applications and data, with an XtremIO handling the more demanding tasks.

How does XtremIO compare to others?

EMC with XtremIO is taking a different approach than some of their competitors whose model is to compare their faster flash-based solutions vs. traditional mid-market and enterprise arrays, appliances or storage systems on a storage I/O IOP performance basis. With XtremIO there is improved performance measured in IOPs or database transactions among other metrics that matter. However there is also an emphasis on consistent, predictable, quality of service (QoS) or what is known as deterministic storage I/O performance basis. This means both higher IOPs with lower latency while doing normal workload along with background data services (snapshots, data footprint reduction, etc).

Some of the competitors focus on how many IOPs or how much work they can do, however without context or showing the impact to applications when background tasks or other data services are in use. Other differences include how cluster nodes are interconnected (for scale-out solutions), such as use of Ethernet and IP-based networks vs. dedicated InfiniBand or PCIe fabrics. Host server attachment will also differ, as some are only iSCSI or Fibre Channel block, or NAS file, or give a mix of different protocols and interfaces.

An industry trend however is to expand beyond the flash SSD need for speed focus by adding context along with QoS, deterministic behavior and addition of data services including snapshots, local and remote replication, multi-tenancy, metering and metrics, security among other items.

Storage I/O trends

Who or what are XtremIO competition?

To some degree, vendors who only have PCIe flash SSD cards might position themselves as the alternative to all-SSD or hybrid (mixed SSD and HDD) based solutions. FusionIO used to take that approach until they acquired NexGen (a storage system vendor) and have since taken a broader, more balanced solution approach of using the applicable tool for the task or application at hand.

Other competitors include the all-SSD storage array, system or appliance vendors, a list that includes legacy vendors as well as startups: among others IBM (who bought TMS, e.g. FlashSystems), NetApp (EF540), Solidfire, Pure, Violin (who did a recent IPO) and Whiptail (bought by Cisco). Then there are the hybrids, a long list including Cloudbyte (software), Dell, EMC's other products, HDS, HP, IBM, NetApp, Nexenta (software), Nimble, Nutanix, Oracle, Simplivity and Tintri among others.

What’s new with this XtremIO announcement

10TB X-Bricks enable 10 to 40TB (physical space capacity) per cluster (available on 11/19/13). 20TB X-Bricks (larger capacity drives) will double the space capacity in January 2014. If you are doing the math, that means either a single brick (dual controller) system, or up to four bricks (nodes, each with dual controllers) configurations. Common across all system configurations are data features such as thin provisioning, inline data footprint reduction (e.g. dedupe) and XtremIO Data Protection (XDP).

What does XtremIO look like?

XtremIO consists of up to four nodes (today) based on what EMC calls X-Bricks.
EMC XtremIO X-Brick
25 SSD drive X-Brick

Each 4U X-Brick has 25 eMLC SSD drives in a standard EMC 2U DAE (disk enclosure) like those used with the VNX and VMAX for SSD and Hard Disk Drives (HDD). In addition to the 2U drive shelf, there is a pair of 1U storage processors (e.g. controllers) that give redundancy and shared access to the storage shelf.

XtremIO Architecture
XtremIO X-Brick block diagram

XtremIO storage processors (controllers) and drive shelf block diagram. Each X-Brick and its storage processors or controllers communicate with each other and other X-Bricks via a dedicated InfiniBand fabric using Remote Direct Memory Access (RDMA) for memory-to-memory data transfers. The controllers or storage processors (two per X-Brick) each have dual processors with eight cores for compute, along with 256GB of DRAM memory. Part of each controller's DRAM memory is set aside as a mirror of its partner or peer and vice versa, with access being over the InfiniBand fabric.

XtremIO fabric
XtremIO X-Brick four node fabric cluster or instance

How XtremIO works

Servers access XtremIO X-Bricks using iSCSI and Fibre Channel for block access. A responding X-Brick node handles the storage I/O request and, in the case of a write, updates the other nodes. The handling node or controller (aka storage processor) checks its metadata map in memory to see if the data is new and unique. If so, the data gets saved to SSD and the metadata information is updated across all nodes. Note that data gets ingested and chunked (or sharded) into 4KB blocks. So for example, if a 32KB storage I/O request from the server arrives, it is broken (e.g. chunked or sharded) into eight 4KB pieces, each with a mathematically unique fingerprint created. This fingerprint is compared to what is known in the in-memory metadata tables (a hexadecimal number compare, so a quick operation). Based on the comparison, if the data is unique it is saved and pointers are created; if it already exists, pointers are updated.

In addition to determining if data is unique, the fingerprint is also used to generate a balanced data dispersal plan across the nodes and SSD devices. Thus there is the benefit of reducing duplicate data during ingestion, while also reducing back-end I/Os within the XtremIO storage system. Another byproduct is the reduction in time spent on garbage collection or other background tasks commonly associated with SSD and other storage systems.
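The ingest path just described (chunk into 4KB shards, fingerprint, store only unique data, use the fingerprint for placement) can be sketched in a few lines. This is a toy model, not EMC's XIOS code: the choice of SHA-1 as the hash and the 4-node modulo placement are illustrative assumptions.

```python
import hashlib
import os

CHUNK = 4096                      # XtremIO shards incoming I/O into 4KB pieces
metadata = {}                     # in-memory table: fingerprint -> (node, location)
_next_loc = 0

def ingest(data: bytes):
    """Chunk a write into 4KB shards, fingerprint each, store only unique ones."""
    global _next_loc
    unique = duplicate = 0
    for off in range(0, len(data), CHUNK):
        shard = data[off:off + CHUNK]
        fp = hashlib.sha1(shard).hexdigest()     # illustrative hash choice
        if fp in metadata:
            duplicate += 1                       # known data: only pointers update
        else:
            node = int(fp, 16) % 4               # fingerprint also drives dispersal,
            metadata[fp] = (node, _next_loc)     # e.g. across a 4 X-Brick cluster
            _next_loc += 1
            unique += 1
    return unique, duplicate

data = os.urandom(32 * 1024)                     # a 32KB server write
print(ingest(data))                              # → (8, 0): eight new shards stored
print(ingest(data))                              # → (0, 8): fully deduplicated
```

Because the fingerprint compare happens against in-memory tables before anything touches flash, duplicate writes consume no back-end I/O beyond the pointer update.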

Metadata is kept in memory with a persistent copy written to a reserved area on the flash SSD drives (think of it as a vault area) to keep system state and consistency. In between data consistency points the metadata is kept in a journal log, like how a database handles log writes. What is different from a typical database is that the XtremIO XIOS platform software does these consistency point writes for persistence at a granularity of seconds vs. minutes or hours.
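The journal-plus-consistency-point pattern described above looks roughly like the following. This is a toy model, not XIOS internals: the two-second interval and the class and field names are illustrative assumptions.

```python
import time

class MetadataJournal:
    """Metadata updates land in an in-memory journal and are replayed into a
    persistent 'vault' copy at periodic consistency points (XIOS does this at
    a granularity of seconds; the interval here is illustrative)."""

    def __init__(self, interval_s=2.0):
        self.interval_s = interval_s
        self.journal = []                 # in-memory log, like database log writes
        self.vault = {}                   # persistent copy (reserved SSD area)
        self._last_cp = time.monotonic()

    def update(self, key, value):
        self.journal.append((key, value))
        if time.monotonic() - self._last_cp >= self.interval_s:
            self.consistency_point()      # flush every few seconds

    def consistency_point(self):
        for key, value in self.journal:   # replay the journal into the vault
            self.vault[key] = value
        self.journal.clear()
        self._last_cp = time.monotonic()

j = MetadataJournal()
j.update("fingerprint:ab12", "node0:slot7")
j.consistency_point()                     # force a flush for the example
print(j.vault)                            # → {'fingerprint:ab12': 'node0:slot7'}
```

On recovery, replaying the journal on top of the last vaulted consistency point restores a coherent metadata state, which is the same recovery idea databases use with their logs.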

Storage I/O trends

What about rumor that XtremIO can only do 4KB IOPs?

Does this mean that the smallest storage I/O or IOP that XtremIO can do is 4KB?

That is a rumor or some fud I have heard floated by a competitor (or two or three), which assumes that if only a 4KB internal chunk or shard is being used for processing, that must mean no IOPs smaller than 4KB from a server.

XtremIO can do storage I/O IOP sizes of 512 bytes (e.g. the standard block size), as do other systems. Note that the standard server storage I/O block or IO size is 512 bytes or multiples of that, unless the new 4KB advanced format (AF) block size is being used, which based on my conversations with EMC is not supported, yet. (Updated 11/15/13: EMC has indicated that host (front-end) 4K AF support, along with 512 byte emulation modes, is available now with XIOS.) Also keep in mind that since XtremIO XIOS internally works with 4KB chunks or shards, that is a stepping stone for eventually being able to leverage back-end AF drive support should EMC decide to do so. (Updated 11/15/13: Waiting for confirmation from EMC about whether back-end AF support is now enabled or not; will give more clarity as it is received.)

What else is EMC doing with XtremIO?

  • VCE Vblock XtremIO systems for SAP HANA (and other databases) in memory databases along with VDI optimized solutions.
  • VPLEX and XtremIO for extended distance local, metro and wide area HA, BC and DR.
  • EMC PowerPath XtremIO storage I/O path optimization and resiliency.
  • Secure Remote Support (aka phone home) and auto support integration.

Boosting your available software license minutes (ASLM) with SSD

Another use of SSD has been the opportunity to make better use of servers, stretching their usefulness or delaying the purchase of new ones by improving their effective ability to do more work. In the past, this technique of using SSDs to delay a server or CPU upgrade was applied when hardware was more expensive, or during the dot-com bubble to fill surge demand gaps. It has the added benefit of stretching database and other expensive software licenses to go further or do more work. The less time servers spend waiting for IOPs, the more time there is for doing useful work and getting value from the software license. Otoh, the more time spent waiting, the more available software minutes are lost, which is cost overhead.

Think of available software license minutes (ASLM) as the minutes during which your software is providing value by doing useful work. On the other hand, if those minutes are not used for useful work (e.g. spent waiting or lost due to CPU, server or I/O wait), they are lost. This is like the airlines' available seat miles (ASM) metric, where a seat left empty is a lost opportunity, while a seat used is value, not to mention if yield management is applied to price that seat differently. To make up for that loss, many organizations have to add extra servers and thus more software licensing costs.
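To put the ASLM idea in rough numbers: the license cost and wait fractions below are hypothetical figures for illustration only, not from any vendor price list.

```python
# Hypothetical numbers for illustration only (not from any vendor price list).
LICENSE_COST_PER_YEAR = 47_000        # e.g. a per-core database license, USD
MINUTES_PER_YEAR = 365 * 24 * 60      # 525,600 available software license minutes

def aslm(io_wait_fraction):
    """Useful license minutes per year, and cost per useful minute."""
    useful = MINUTES_PER_YEAR * (1 - io_wait_fraction)
    return useful, LICENSE_COST_PER_YEAR / useful

hdd_min, hdd_cost = aslm(0.30)        # server stalls on I/O 30% of the time
ssd_min, ssd_cost = aslm(0.05)        # SSD cuts the wait to 5%

print(f"minutes recovered per year: {ssd_min - hdd_min:,.0f}")   # → 131,400
print(f"cost per useful minute: ${hdd_cost:.3f} vs ${ssd_cost:.3f}")
```

Every recovered minute is license money already spent now doing useful work, which is the empty-seat vs. occupied-seat distinction in the ASM analogy.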

Storage I/O trends

Can we get a side of context with them metrics?

EMC along with some other vendors are starting to give more context with their storage I/O performance metrics that matter vs. simple IOPs or hero marketing metrics. However context extends beyond performance to availability and space capacity, which means data protection overhead. As an example, EMC claims 25% overhead for RAID 5 and 20% for RAID 6 (or 30% for a RAID 5/RAID 6 combo), where a 25 drive (SSD) XDP set has an 8% overhead. However this assumes a 4+1 (5 drive) RAID 5 set, which is not an apples to apples comparison on a space overhead basis. For example, a 25 drive RAID 5 (24+1) would have around a 4% parity protection space overhead, or a RAID 6 (23+2) about 8%.
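The parity space overhead numbers above follow directly from the drive counts; a quick check:

```python
def parity_overhead(data_drives, parity_drives):
    """Fraction of raw space consumed by parity in an N+P layout."""
    return parity_drives / (data_drives + parity_drives)

# Same 25-drive shelf, different layouts: the overhead comparison is
# only apples to apples if the group sizes match.
print(f"RAID 5 4+1  : {parity_overhead(4, 1):.0%}")    # → 20%
print(f"RAID 5 24+1 : {parity_overhead(24, 1):.0%}")   # → 4%
print(f"RAID 6 23+2 : {parity_overhead(23, 2):.0%}")   # → 8%
```

Note that space overhead alone says nothing about rebuild times or multi-drive failure tolerance, which is exactly the missing context discussed next.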

Granted while the space protection overhead might be more apples to apples with the earlier examples to XDP, there are other differences. For example solutions such as XDP can be more tolerant to multiple drive failures with faster rebuilds than some of the standard or basic RAID implementations. Thus more context and clarity would be helpful.

StorageIO would like to see vendors, including EMC along with the startups, who give data protection space overhead comparisons without context start providing that context (and applauds those who already do). This means providing context for data protection space overhead comparisons similar to performance metrics that matter. For example, simply state with an asterisk or footnote that a 4+1 RAID 5 is being compared vs. a 25 drive erasure code, forward error correction, dispersal, XDP or wide-stripe RAID configuration (e.g. can we get a side of context). Note this is in no way unique to EMC and in fact is quite common with many of the smaller startups as well as established vendors.

General comments

My laundry list of items, which for now are nice-to-haves however for you might be need-to-haves, would include native replication (today leverages RecoverPoint), Advanced Format (4KB) support for servers (Updated 11/15/13: per above, EMC has confirmed that host/server-side (front-end) AF along with 512 byte emulation modes exist today) as well as for SSD based drives, DIF (Data Integrity Feature), and Microsoft ODX among others. While 12Gb SAS server to X-Brick attachment for small in-the-cabinet connectivity might be nice for some, more practical on a go-forward basis would be 40GbE support.

Now let us see what EMC does with XtremIO and how it competes in the market. One indicator to watch in the industry and market of the impact or presence of EMC XtremIO is the amount of fud and mud that will be tossed around. Perhaps time to make a big bowl of popcorn, sit back and enjoy the show…

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Some fall 2013 AWS cloud storage and compute enhancements

Storage I/O trends

Some fall 2013 AWS cloud storage and compute enhancements

I just received via Email the October Amazon Web Services (AWS) Newsletter in advance of the re:Invent event next week in Las Vegas (yes I will be attending).

AWS October newsletter and enhancement updates

What this means

AWS is arguably the largest of the public cloud services, with a diverse set of services and options across multiple geographic regions to meet different customer needs. As such it is not surprising to see AWS continue to expand their service offerings, growing their portfolio in terms of features and functionality along with extending their presence in different geographies.

Let's see what else AWS announces next week in Las Vegas at their 2013 re:Invent event.

Click here to view the current October 2013 AWS newsletter. You can view (and signup for) earlier AWS newsletters here, and while you are at it, view the current and recent StorageIO Update newsletters here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seagate Kinetic Cloud and Object Storage I/O platform (and Ethernet HDD)

Storage I/O trends

Seagate Kinetic Cloud and Object Storage I/O platform

Seagate announced today their Kinetic platform and drive designed for use by object API accessed storage, including for cloud deployments. The Kinetic platform includes Hard Disk Drives (HDD) that feature 1Gb Ethernet (1 GbE) attachment and speak an object access API, or what Seagate refers to as key / value.

Seagate Kinetic architecture

What is being announced with Seagate Kinetic Cloud and Object (Ethernet HDD) Storage?

  • Kinetic Open Storage Platform – Ethernet drives, key / value (object access) API, partner software
  • Software developer’s kits (SDK) – Developer tools, documentation, drive simulator, code libraries, code samples including for SwiftStack and Riak.
  • Partner ecosystem

What is Kinetic?

While it has 1 GbE ports, do not expect to be able to use those for iSCSI or NAS including NFS, CIFS or other standard access methods. Being Ethernet based, the Kinetic drive only supports the key value object access API. What this means is that applications, cloud or object stacks, key value and NoSQL data repositories, or other software that adopt the API can communicate directly using object access.

Seagate Kinetic storage

Internally, the HDD functions as a normal drive would for storing and accessing data; the object access function and translation layer shifts from being in an Object Storage Device (OSD) server node to inside the HDD. The Kinetic drive takes on the key value API personality over its 1 GbE ports instead of traditional Logical Block Addressing (LBA) and Logical Block Number (LBN) access using 3g, 6g or emerging 12g SAS or SATA interfaces. Instead, Kinetic drives respond to object access (aka what Seagate calls key / value) API commands such as Get and Put among others. Learn more about object storage, access and clouds at www.objectstoragecenter.com.
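The access model shift is easier to see in code. This is a dict-backed toy simulation of the key/value personality, not Seagate's actual SDK (the real Kinetic protocol uses protobuf messages over TCP; the class and method names here are illustrative):

```python
class KineticDriveSim:
    """Toy simulation of the Kinetic access model: the drive answers
    key/value Get/Put/Delete requests over Ethernet instead of serving
    LBA block reads and writes over SAS/SATA."""

    def __init__(self):
        self._media = {}      # stands in for the drive's internal media layout

    def put(self, key: bytes, value: bytes) -> None:
        """Store a value under a key (the drive decides block placement)."""
        self._media[key] = value

    def get(self, key: bytes) -> bytes:
        """Retrieve the value stored under a key."""
        return self._media[key]

    def delete(self, key: bytes) -> None:
        del self._media[key]

drive = KineticDriveSim()
drive.put(b"bucket/object-0001", b"object payload bytes")
print(drive.get(b"bucket/object-0001"))    # → b'object payload bytes'
```

The point is that the caller never sees an LBA: the mapping from keys to physical blocks lives inside the drive, which is the translation-layer shift the paragraph above describes.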

Storage I/O trends

Some questions and comments

Is this the same as what was attempted almost a decade ago now with the T10 OSD drives?

Seagate claims no.

What is different this time around with Seagate doing a drive that to some may vaguely resemble the predecessor failed T10 OSD approach?

Industry support for object access and API development have progressed from an era of build it and they will come thinking, to now where the drives are adapted to support current cloud, object and key value software deployment.

Won't 1 GbE ports be too slow vs. 12g, 6g or even 3g SAS and SATA ports?

Keep in mind those would be apples to oranges comparisons based on the protocols and types of activity being handled. Kinetic types of devices initially will be used for large, data-intensive applications where the emphasis is on storing or retrieving large amounts of information, vs. low latency transactional work. Also, keep in mind that one of the design premises is to keep cost low and spread the work over many nodes and devices to meet those goals, while relying on server-side caching tools.
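To put the apples-to-oranges comparison in rough numbers, here is a back-of-envelope calculation; the rates ignore protocol overhead and the 48-drive figure is an arbitrary example:

```python
# Rough usable line rates in MB/s. SAS/SATA at these generations use 8b/10b
# encoding, so usable bytes/s ≈ raw Gbit/s / 10; 1 GbE is shown at its
# nominal payload rate. Back-of-envelope only, ignoring protocol overhead.
rates_MBps = {
    "1 GbE (Kinetic)": 1000 // 8,     # ~125 MB/s per drive
    "3g SAS/SATA":     3000 // 10,    # ~300 MB/s
    "6g SAS/SATA":     6000 // 10,    # ~600 MB/s
    "12g SAS":         12000 // 10,   # ~1200 MB/s
}
for link, mbps in rates_MBps.items():
    print(f"{link:16s} ~{mbps} MB/s per device")

# The design premise is aggregate scale-out, not single-device speed:
drives = 48                           # e.g. an arbitrary shelf of Kinetic drives
print(f"{drives} Kinetic drives ~{drives * rates_MBps['1 GbE (Kinetic)']} MB/s aggregate")
```

A single 1 GbE drive is indeed slower than a 6g SAS device, however for large streaming workloads spread across many drives, the aggregate throughput is what matters.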

Storage I/O trends

Does this mean that the HDD is actually software defined?

Seagate and other HDD manufacturers have not yet jumped on the software defined marketing (SDM) bandwagon. They could join the software defined fun (SDF) and talk about a software defined disk (SDD) or software defined HDD (SDHDD); however, let us leave that alone for now.

The reality is that there is far more software in a typical HDD than is realized. Sure, some of that is packaged inside ASICs (Application Specific Integrated Circuits) or running as firmware that can be updated. However, there is a lot of software running in a HDD, hence the need for powerful yet energy-efficient processors in those devices. On a drive per drive basis, you may see a Kinetic device consume more energy vs. other equivalent HDDs due to the increase in processing (compute) needed to run the extra software. However that also represents an off-load of some work from servers, enabling them to be smaller or do more work.

Are these drives for everybody?

It depends on if your application, environment, platform and technology can leverage them or not. This means if you view the world only through what is new or emerging then these drives may be for all of those environments, while other environments will continue to leverage different drive options.

Object storage access

Does this mean that block storage access is now dead?

Not quite, after all there is still some block activity involved, it is just that they have been further abstracted. On the other hand, many applications, systems or environments still rely on block as well as file based access.

What about OpenStack, Ceph, Cassandra, Mongo, Hbase and other support?

Seagate has indicated those and others are targeted to be included in the ecosystem.

Seagate needs to be careful balancing their story and message with Kinetic to play to and support those focused on the new and emerging, while also addressing their bread and butter legacy markets. The balancing act is communicating options, flexibility to choose and adopt the right technology for the task without being scared of the future, or clinging to the past, not to mention throwing the baby out with the bath water in exchange for something new.

For those looking to do object storage systems, or cloud and other scale based solutions, Kinetic represents a new tool to do your due diligence and learn more about.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?

Storage I/O trends

Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?

Recently, seven-plus year old cloud storage startup Nirvanix announced that they were finally shutting down and that customers should move their data.

nirvanix customer message

Nirvanix has also posted an announcement that they have established an agreement with IBM Softlayer (read about that acquisition here) to help customers migrate to those services as well as to those of Amazon Web Services (AWS), (read more about AWS in this primer here), Google and Microsoft Azure.

Cloud customer concerns?

With Nirvanix shutting down there has been plenty of articles, blog posts, twitter tweets and other conversations asking if Clouds are safe.

Btw, here is a link to my ongoing poll where you can cast your vote on what you think about clouds.

IMHO clouds can be safe if used in safe ways which includes knowing and addressing your concerns, not to mention following best practices, some of which pre-date the cloud era, sometimes by a few decades.

Nirvanix Storm Clouds

More on this in a moment; however, let's touch base on Nirvanix and why I said they were finally shutting down.

The reason I say finally shutting down is that there were plenty of early warning signs and storm clouds circling Nirvanix for a few years now.

What I mean by this is that in their seven plus years of being in business, there have been more than a few CEO changes, something that is not unheard of.

Likewise there have been some changes to their business model, ranging from selling their software as a service, to a solution, to hosting among others; again, smart startups and established organizations will adapt over time.

Nirvanix also invested heavily in marketing, public relations (PR) and analyst relations (AR) to generate buzz along with gaining endorsements as do most startups to get recognition, followings and investors if not real customers on board.

In the case of Nirvanix, the indicator signs mentioned above also included what seemed like a semi-annual if not annual changing of CEOs, marketing and others tying into business model adjustments.

cloud storage

It was only a year or so ago that, if you gauged a company's health by its PR and AR news activity and endorsements, you would have believed Nirvanix was about to crush Amazon, Rackspace or many others. Perhaps some actually did believe that, followed shortly thereafter by the abrupt departure of their then CEO and marketing team. Thus just as fast as Nirvanix seemed to be the phoenix rising to stardom, their aura started to dim again, which could or should have been a warning sign.

This is not to single out Nirvanix; however, given their penchant for marketing and now what appears to some as a sudden collapse or shutdown, they have also become a lightning rod of sorts for clouds in general. Given all the hype and fud around clouds, when something does happen the detractors will be quick to jump or pile on and say things like "See, I told you, clouds are bad".

Meanwhile the cloud cheerleaders may go into denial saying there are no problems or issues with clouds, or they may go back into a committee meeting to create a new stack, standard, API set marketing consortium alliance. ;) On the other hand, there are valid concerns with any technology including clouds that in general there are good implementations that can be used the wrong way, or questionable implementations and selections used in what seem like good ways that can go bad.

This is not to say that clouds in general whether as a service, solution or product on a public, private or hybrid bases are any riskier than traditional hardware, software and services. Instead what this should be is a wake up call for people and organizations to review clouds citing their concerns along with revisiting what to do or can be done about them.

Clouds: Being prepared

Ben Woo of Neuralytix posted the question "Collateral Considerations If You Were/Are A Nirvanix Customer" to one of the LinkedIn groups, to which I posted some tips and recommendations including:

1) If you have another copy of your data somewhere else (which you should btw), how will your data at Nirvanix be securely erased, and the storage it resides on be safely (and secure) decommissioned?

2) If you do have another copy of your data elsewhere, how current is it, and can you bring it up to date from various sources (including updates from Nirvanix while they stay online)?

3) Where will you move your data to short or near term, as well as long-term?

4) What changes will you make to your procurement process for cloud services in the future to protect against situations like this happening to you?

5) As part of your plan for putting data into the cloud, refine your strategy for getting it out, moving it to another service or place as well as having an alternate copy somewhere.

Fwiw, for any data I put into a cloud service there is also another copy somewhere else. Even though there is a cost, there is a benefit: the ability to decide which copy to use if needed, as well as having a backup/spare copy.

Storage I/O trends

Cloud Concerns and Confidence

As part of cloud procurement, whether of services or products, the same proper due diligence should occur as if you were buying traditional hardware, software, networking or services. That includes checking out not only the technology, but also the company's financials, business records, and customer references (both good and not so good or bad ones) to gain confidence. Part of gaining that confidence also involves addressing ahead of time how you will get your data out of or back from that service if needed.

Keep in mind that if your data is very important, are you going to keep it in just one place? For example I have data backed-up as well as archived to cloud providers, however I also have local copies either on-site or off.

Likewise there is data I keep locally as well as at alternate locations including cloud. Sure that is costly; however, by not treating all of my data and applications the same, I am able to balance those costs out, plus use the cost advantages of different services as well as on-site options to be effective. I may be spending no less on data protection (in fact I am spending a bit more), however I also have more copies and versions of important data in multiple locations. Data that is not changing often does not get protected as often; however, there are multiple copies to meet different needs or threat risks.

Storage I/O trends

Don’t be scared of clouds, be prepared

While some of the other smaller cloud storage vendors will see some new customers, I suspect that near to mid-term it will be the larger, more established and well funded providers that gain the most from this current situation. Granted some customers are looking for alternatives to the mega cloud providers such as Amazon, Google, HP, IBM, Microsoft and Rackspace among others; however, there is a long list of others, some of which are not so well-known but should be, such as CenturyLink/Savvis, Verizon/Terremark, SunGard, Dimension Data, Peak, Bluehost, Carbonite, Mozy (owned by EMC), Xerox ACS and Evault (owned by Seagate), not to mention many others.

Something to be aware of as part of doing your due diligence is determining who or what actually powers a particular cloud service. The larger providers such as Rackspace, Amazon, Microsoft, HP among others have their own infrastructure while some of the smaller service providers may in fact use one of the larger (or even smaller) providers as their real back-end. Hence understanding who is behind a particular cloud service is important to help decide the viability and stability of who it is you are subscribed to or working with.

Something that I have said for the past couple of years, and a theme of my book Cloud and Virtual Data Storage Networking (CRC Taylor & Francis), is: do not be scared of clouds; however, be ready and do your homework.

This also means having cloud concerns is a good thing. Again, don't be scared; however, find out what those concerns are along with whether they are major or minor. From that list you can start to decide how or if they can be worked around, as well as be prepared ahead of time should you either need all of your cloud data back quickly, or should that service become unavailable.

Also when it comes to clouds, look beyond lowest cost or for free, likewise if something sounds too good to be true, perhaps it is. Instead look for value or how much do you get per what you spend including confidence in the service, service level agreements (SLA), security, and other items.

Keep in mind, only you can prevent data loss either on-site or in the cloud, granted it is a shared responsibility (With a poll).

Additional related cloud conversation items:
Cloud conversations: AWS EBS Optimized Instances
Poll: What Do You Think of IT Clouds?
Cloud conversations: Gaining cloud confidence from insights into AWS outages
Cloud conversations: confidence, certainty and confidentiality
Cloud conversation, Thanks Gartner for saying what has been said
Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Don’t Let Clouds Scare You – Be Prepared
Everything Is Not Equal in the Datacenter, Part 3
Amazon cloud storage options enhanced with Glacier
What do VARs and Clouds as well as MSPs have in common?
How many degrees separate you and your information?

Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

WD buys nand flash SSD storage I/O cache vendor Virident

Storage I/O trends

WD buys nand flash SSD storage I/O cache vendor Virident

Congratulations to Virident for being bought today for $645 million USD by Western Digital (WD). Virident, a nand flash PCIe card startup vendor, has been around for several years and in the last year or two has gained more industry awareness as a competitor to FusionIO among others.

There is a nand flash solid state device (SSD) cash dash occurring, not to mention a fast cache dance, in the IT and data infrastructure (e.g. storage and I/O) sector specifically.

Why the nand flash SSD cash dash and cache dance?

Here is a piece that I did today over at InfoStor on a related theme that sets the basis of why the nand flash-based SSD market is popular for storage and as a cache. Hence there is a flash cash dash and by some dance for increased storage I/O performance.

Like the hard disk drive (HDD) industry before it, which despite what some pundits and prophets have declared (for years if not decades) as being dead (it is still alive), there have been many startups, shutdowns, mergers and acquisitions along with some transformations. Granted, solid-state memory is part of the present and the future, being deployed in new and different ways.

The same thing has occurred in the nand flash-based SSD sector, with LSI acquiring SandForce and SanDisk picking up Pliant and FlashSoft among others. Then there is Western Digital (WD), which recently has danced with their cash as they dash to buy up all things flash, including Stec (drives & PCIe cards), Velobit (cache software) and Virident (PCIe cards), along with Arkeia (backup) and an investment in Skyera.

Storage I/O trends

What about industry trends and market dynamics?

Meanwhile there have been some other changes, with former industry darling and post-IPO highflying stock FusionIO hitting a market reality and sudden CEO departure a few months ago. However, after a few months of their stock being pummeled, today it bounced back, perhaps as people now speculate who will buy FusionIO with WD picking up Virident. Note that one of Virident's OEM customers is EMC for their PCIe flash card XtremSF, as are Micron and LSI.

Meanwhile Stec, also now owned by WD, was EMC's original flash SSD drive supplier, or what they refer to as EFDs (Electronic Flash Devices), not to mention having also supplied HDDs to them (also keep in mind WD bought HGST a year or so back).

There are some early signs, such as their stock price jumping today, that it was probably oversold. Perhaps people are now speculating that Seagate, which had been an investor in Virident (bought by WD for $645 million today), might be in the market for somebody else? Alternatively, perhaps WD didn't see the value in a FusionIO, or wasn't willing to make a big flash cache cash grab dash of that size? Also note Seagate won a $630 million infringement lawsuit vs. WD, and the appeal was recently upheld (here and here).

Does that mean FusionIO could become Seagate’s target or that of NetApp, Oracle or somebody else with the cash and willingness to dash, grab a chunk of the nand flash, and cache market?

Likewise, there are the software I/O and caching tool vendors, some of which are tied to VMware and virtual servers vs. others that are more flexible and gaining popularity. What about a systems or solution appliance play, could that be in the hunt for a Seagate?

Anything is possible however IMHO that would be a risky move, one that many at Seagate probably still remember from their experiment with Xiotech, not to mention stepping on the toes of their major OEM customer partners.

Storage I/O trends

Thus I would expect that if Seagate does anything, it would be more along the lines of a component-type supplier, meaning a FusionIO (yes they have NexGen, however that could be easily dealt with), OCZ, perhaps even an LSI or Micron; however, some of those start to get rather expensive for a quick flash cache grab with some stock and cash.

Also, keep in mind that FusionIO, in addition to their PCIe flash cards, also has the ioTurbine software caching tool. If you are not familiar with it, IBM recently made an announcement of their Flash Cache Storage Accelerator (FCSA) that has an affiliation to guess who?

Closing comments (for now)

Some of the systems or solutions players will survive, perhaps even being acquired as XtremIO was by EMC, or file for IPO like Violin, or express their wish to IPO and/or be bought such as all the others (e.g. Skyera, Whiptail, Pure, Solidfire, Cloudbyte, Nimbus, Nimble, Nutanix, Tegile, Kaminario, Greenbyte, and Simplivity among others).

Here’s the thing: those who really do know what is going to happen are not saying and probably cannot say, while those who are talking about what will happen are like the rest of us, just speculating, providing perspectives, or stirring the pot among other things.

So who will be next in the flash cache ssd cash dash dance?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Is more of something always better? Depends on what you are doing

Storage I/O trends

Is more always better? Depends on what you are doing

As with many things it depends, however how about some of these?

Is more better for example (among others):

  • Facebook likes
  • Twitter followers or tweets (I’m @storageio btw)
  • Google+ likes, follows and hangouts
  • More smart phone apps
  • LinkedIn connections
  • People in your circle or community
  • Photos or images per post or article
  • People working with or for you
  • Partners vs. doing more with those you have
  • People you are working for or with
  • Posts or longer posts with more in them
  • IOPs or SSD and storage performance
  • Domains under management and supported
  • GB/TB/PB/EB supported or under management
  • Part-time jobs or a better full-time opportunity
  • Metrics vs. those that matter with context
  • Programmers to get job done (aka mythical man month)
  • Lines of code per cost vs. more reliable and tested code per cost
  • For free items and time spent managing them vs. more productivity for a nominal fee
  • Meetings for planning on what to do vs. streamline and being more productive
  • More sponsors or advertisers or underwriters vs. fewer yet more effective ones
  • Space in your booth or stand at a trade show or conference vs. using what you have more effectively
  • Copies of the same data vs. fewer yet more unique (not full though) copies of information
  • Patents in your portfolio vs. more technology and solutions being delivered
  • Processors, sockets, cores, threads vs. using them more effectively
  • Ports and protocols vs. using them more effectively

Storage I/O trends

Thus, do more resources matter, or does making more effective use of them matter more?

For example, more ports, protocols, processors, cores, sockets, threads, memory, cache, drives, bandwidth or people, among other things, is not always better, particularly if those resources are not being used effectively.

Likewise, don't confuse effective with efficient, which is often assumed to simply mean utilized.

For example, a cache or memory may be 100% utilized (what some call efficient) yet only provide a 35% effective benefit (cache hits), with the rest being misses or cache churn.
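To make the utilization vs. effectiveness distinction concrete, here is a minimal sketch using hypothetical numbers (the 1,000,000 accesses and 350,000 hits are illustrative, matching the 35% figure above, not measurements from any real system):

```python
# Hypothetical numbers illustrating utilization vs. effectiveness:
# a cache can be 100% full ("efficient") yet deliver a low hit rate.
cache_capacity_used = 1.0          # 100% of cache slots occupied
accesses = 1_000_000               # total I/O requests observed
hits = 350_000                     # requests served from cache

hit_rate = hits / accesses         # the effective benefit
miss_rate = 1.0 - hit_rate         # misses go to slower back-end storage

print(f"Utilization: {cache_capacity_used:.0%}")  # 100% - the "efficient" view
print(f"Effective hit rate: {hit_rate:.0%}")      # 35% - the effective view
print(f"Miss rate: {miss_rate:.0%}")              # 65% hit the back-end anyway
```

The point of the sketch: watching only the utilization number would report this cache as doing great, while the effective metric shows most I/Os still reach the slower tier.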

Throwing more processing power, in terms of clock speed or cores, at a problem is one thing, kind of like throwing more server blades at a software problem, vs. using those cores, sockets and threads more effectively.

Good software will run better on fast hardware while enabling more to be done with the same or less.

Thus with better software or tools, more work can be done in an effective way leveraging those resources vs. simply throwing or applying more at the situation.

Hopefully you get the point, so no need to do more with this post (for now), if not, stay tuned and pay more attention around you.

Ok, nuff said, I need to go get more work done now.

Cheers
Gs


Fall 2013 Dutch cloud, virtual and storage I/O seminars

Storage I/O trends

Fall 2013 Dutch cloud, virtual and storage I/O seminars

It is that time of the year again when StorageIO will be presenting a series of seminar workshops in the Netherlands on cloud, virtual and data storage networking technologies, trends along with best practice techniques.

Brouwer Storage

StorageIO partners with Brouwer Storage Consultancy, an independent firm in the Netherlands that organizes these sessions. These sessions also mark Brouwer Storage Consultancy celebrating ten years in business, along with a long partnership with StorageIO.

Server Storage I/O Backup and Data Protection Cloud and Virtual

The fall 2013 Dutch seminars include coverage of storage I/O networking, data protection and related trends and topics for cloud and virtual environments. Click on the following links or images to view an abstract of the three sessions, including what you will learn, who they are for, buzzwords, themes, topics and technologies that will be covered.

  • Modernizing Data Protection: Moving Beyond Backup and Restore (September 30 & October 1, 2013)
  • Storage Industry Trends: What's News, What's The Buzz and Hype (October 2, 2013)
  • Storage Decision Making: Acquisition, Deployment, Day to Day Management (October 3 & 4, 2013)

All workshop seminars are presented in a vendor and technology neutral manner (e.g. these are not vendor marketing or sales presentations), providing independent perspectives on industry trends, who is doing what, along with benefits and caveats of various approaches to addressing data infrastructure and storage challenges. View posts about earlier events here and here.

Storage I/O trends

As part of the theme of being vendor and technology neutral, the workshop seminars are held off-site at hotel venues in Nijkerk, Netherlands, so no need to worry about sales teams coming in to sell you something during the breaks or lunch (which are provided). There are also opportunities throughout the workshops for engagement, discussion and interaction with other attendees, including your peers from various commercial, government and service provider organizations among others.

Learn more and register for these events by visiting the Brouwer Storage Consultancy website page (here) and calling them at +31-33-246-6825 or via email info@brouwerconsultancy.com.

Storage I/O events

View other upcoming and recent StorageIO activities including live in-person, online web and recorded activities on our events page here, as well as check out our commentary and industry trends perspectives in the news here.

Bitter ballen
Ok, nuff said, I’m already hungry for bitter ballen (see above)!

Cheers
Gs


Summer 2013 Server and StorageIO Update Newsletter

StorageIO 2013 Summer Newsletter

Cloud, Virtualization, SSD, Data Protection, Storage I/O

Welcome to the Summer 2013 (combined July and August) edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics.

Summer 2013 Newsletter

This summer has been far from quiet on the mergers and acquisitions (M&A) front, with Western Digital (WD) continuing its buying spree, including Stec among others. There are also the HDS Mid Summer Storage and Converged Compute Enhancements and EMC Evolves Enterprise Data Protection with Enhancements (Part I and Part II).

With VMworld just around the corner along with many other upcoming events, watch for more announcements to be covered in future editions and on StorageIOblog as we move into fall.

Click on the following links to view the Summer 2013 edition as an HTML (sent via email) version or as a PDF version. Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs


IBM Server Side Storage I/O SSD Flash Cache Software

Storage I/O trends

IBM Server Side Storage I/O SSD Flash Cache Software

As I often say, the best server storage I/O or IOP is the one that you do not have to do. The second best is the one with the least impact or that can be done in a cost-effective way. Likewise, the question is not if solid-state devices (SSD) including nand flash are in your future, rather when, where, why, with what, how much, and from whom. Location also matters when it comes to SSD including nand flash, with different environments and applications leveraging different placement (locality) options, not to mention how much performance you need vs. want.

As part of their $1 billion USD (to be spent over three years, or roughly $333 million per year) flash ahead initiative, IBM has announced their Flash Cache Storage Accelerator (FCSA) server software. While IBM did not use the term (congratulations and thank you btw), some creative marketer might want to try calling this Software Defined Cache (SDC) or Software Defined SSD (SDSSD); if that occurs, apologies in advance ;). Keep in mind that it was about a year ago this time when IBM announced that they were acquiring SSD industry veteran Texas Memory Systems (TMS).

What was announced, introducing Flash Cache Storage Accelerator (FCSA)

With this announcement of FCSA, slated for customer general availability by end of August, IBM joins EMC and NetApp among other storage systems vendors who have developed their own, or collaborated on, server-side I/O optimization and cache software. Some of the other startup and established vendors with I/O optimization, performance acceleration and caching software include DataRam (RAMDisk), FusionIO, Infinio (NFS for VMware), PernixData (block for VMware), Proximal Data and SanDisk (which bought FlashSoft) among others.

Read more about IBM Flash Cache Software (FCSA) including various questions and perspectives in part two of this two-part post located here.

Ok, nuff said (for now)

Cheers
Gs


Server and Storage IO Memory: DRAM and nand flash

Storage I/O trends

DRAM, DIMM, DDR3, nand flash memory, SSD, stating what’s often assumed

Often what's assumed is not always the case. For example, in and around server, storage and I/O networking circles, including virtual as well as cloud environments, terms such as nand (Negated AND or NOT AND) flash memory, aka Solid State Device (SSD), DRAM (Dynamic Random Access Memory), DDR3 (Double Data Rate 3), not to mention DIMM (Dual Inline Memory Module), get tossed around with the assumption everybody must know what they mean.

On the other hand, I find plenty of people who are not sure what those among other terms or things are; sometimes they are even embarrassed to ask, particularly if they are a self-proclaimed expert.

So for those who need a refresh or primer, here you go, an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press) available at Amazon.com and other global venues in print and ebook formats.

7.2.2 Memory

Computers rely on some form of memory ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random accessible memory (RAM), non-volatile RAM (NVRAM) or nand Flash (SSD) along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Main memory or RAM, also known as dynamic RAM (DRAM) chips, is packaged in different ways with a common form being dual inline memory modules (DIMMs) for notebook or laptop, desktop PC and servers.

RAM main memory on a server is the fastest form of memory, second only to internal processor or chip based registers, L1, L2 or local memory. RAM and processor based memories are volatile and non-persistent in that when power is removed, the contents of memory are lost. As a result, some form of persistent memory is needed to keep programs and data when power is removed. Read only memory (ROM) and NVRAM are both persistent forms of memory in that their contents are not lost when power is removed. The amount of RAM that can be installed into a server will vary with the specific architecture implementation and operating software being used. In addition to memory capacity and packaging format, the speed of memory is also important for moving data and programs quickly to avoid internal bottlenecks. Memory bandwidth performance increases with the width of the memory bus in bits and frequency in MHz. For example, moving 8 bytes in parallel on a 64 bit bus at 100MHz provides a theoretical 800MByte/sec speed.
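The bandwidth arithmetic above can be sketched as a small calculation. The `transfers_per_cycle` parameter is an addition of mine to hint at how double data rate (DDR) memory moves data on both clock edges; the function name and structure are illustrative, not from the book:

```python
# Theoretical peak memory bandwidth: bus width (in bytes) x frequency.
# Mirrors the text's example of 8 bytes on a 64 bit bus at 100 MHz.
def theoretical_bandwidth(bus_width_bits, frequency_mhz, transfers_per_cycle=1):
    """Return peak bandwidth in MByte/sec (ignores overhead and wait states)."""
    bytes_per_transfer = bus_width_bits // 8
    return bytes_per_transfer * frequency_mhz * transfers_per_cycle

print(theoretical_bandwidth(64, 100))     # 800 MByte/sec, as in the text
# DDR (double data rate) memory transfers twice per clock cycle:
print(theoretical_bandwidth(64, 100, 2))  # 1600 MByte/sec
```

Real sustained bandwidth will be lower than this theoretical figure due to refresh cycles, protocol overhead and contention.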

To improve availability and increase the level of persistence, some servers include battery backed up RAM or cache to protect data in the event of a power loss. Another technique to protect memory data on some servers is memory mirroring where twice the amount of memory is installed and divided into two groups. Each group of memory has a copy of data being stored so that in the event of a memory failure beyond those correctable with standard parity and error correction code (ECC) no data is lost. In addition to being fast, RAM based memories are also more expensive and used in smaller quantities compared to external persistent memories such as magnetic hard disk drives, magnetic tape or optical based memory medias.

Memory diagram
Memory and Storage Pyramid

The above shows a tiered memory model that may look familiar as the bottom part is often expanded to show tiered storage. At the top of the memory pyramid is high-speed processor memory followed by RAM, ROM, NVRAM and FLASH along with many forms of external memory commonly called storage. More detail about tiered storage is covered in chapter 8 (Data Storage – Disk, Tape, Optical, and Memory). In addition to being slower and lower cost than RAM based memories, disk storage along with NVRAM and FLASH based memory devices are also persistent.

By being persistent, when power is removed, data is retained on the storage or memory device. Also shown in the above figure is that on a relative basis, less energy is used to power storage or memory at the bottom of the pyramid than for upper levels where performance increases. From a PCFE (Power, Cooling, Floor space, Economic) perspective, balancing memory and storage performance, availability, capacity and energy to a given function, quality of service and service level objective for a given cost needs to be kept in perspective, and not simply the lowest cost for the most amount of memory or storage. In addition to gauging memory on capacity, other metrics include percent used, operating system page faults and page read/write operations, along with memory swap activity as well as memory errors.

Base 2 versus base 10 numbering systems can account for some storage capacity that appears to be "missing" when real storage is compared to what is expected to be seen. Disk drive manufacturers use base 10 (decimal) to count bytes of data, while memory chip, server and operating system vendors typically use base 2 (binary) to count bytes of data. This has led to confusion when comparing a disk drive base 10 GB with a chip memory base 2 GB of memory capacity, such as 1,000,000,000 (10^9) bytes versus 1,073,741,824 (2^30) bytes. Binary nomenclature (per IEC standards) uses MiB, GiB and TiB to denote base 2 quantities (2^20, 2^30 and 2^40 bytes), while MB, GB and TB denote base 10 million, billion and trillion bytes. Most vendors do document how many bytes, sometimes in both base 2 and base 10, as well as the number of 512 byte sectors supported on their storage devices and storage systems, though it might be in the small print.
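A quick sketch of where the "missing" capacity goes, using a drive marketed as 1 TB (the numbers follow directly from the base 10 vs. base 2 definitions above):

```python
# A drive sold as 1 TB counts bytes in base 10; an operating system
# reporting in binary (TiB) divides by 2**40 instead of 10**12.
drive_bytes = 1_000_000_000_000        # 1 TB as marketed (10**12 bytes)
tib = drive_bytes / 2**40              # the same bytes expressed in TiB

print(f"Marketed: 1.00 TB ({drive_bytes:,} bytes)")
print(f"Reported: {tib:.2f} TiB")      # about 0.91 TiB
print(f"Apparently missing: {1 - tib:.1%}")  # roughly 9%, no bytes lost
```

No capacity is actually lost; the same number of bytes is simply being divided by a larger unit.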

Related more reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs


Non Disruptive Updates, Needs vs. Wants

Storage I/O trends

Do you want non disruptive updates or do you need non disruptive upgrades?

First there is a bit of play on words going on here with needs vs. wants, as well as what is meant by non disruptive.

Regarding needs vs. wants, they are often used interchangeably, particularly in IT, when discussing requirements or what the customer would like to have. The key differentiator is that a need is something that is required and somehow cost justified, hopefully more easily than a want item. A want or like-to-have item is simply that: it's not a need, however it could add value as a benefit, although it may be seen as discretionary.

There is also a bit of play on words with non disruptive updates or upgrades, which can take on different meanings or assumptions. For example, my Windows 7 laptop has automatic Microsoft updates enabled, some of which can be applied while I work. On the other hand, some of those updates may be applied while I work, however they may not take effect until I reboot or exit and restart an application.

This is not unique to Windows, as my Ubuntu and CentOS Linux systems can also apply updates, and in some cases a reboot might be required; same with my VMware environment. Let's not forget about applying new firmware to a server, workstation, laptop or other device, along with networking routers, switches and related devices. Storage is also not immune, as new software or firmware can be applied to an HDD or SSD (traditional or NVMe), either by your workstation, laptop, server or storage system. Speaking of storage systems, they too have software or firmware that gets updated.

Storage I/O trends

The common theme here, though, is whether the code (e.g. software, firmware, microcode, flash update, etc.) can be applied non disruptively, something known as non disruptive code load, followed by activation. With activation, the code may have been applied while the device or software was in use, however it may need a reboot or restart to take effect. With non disruptive code activation, there should not be a disruption to what is being done when the new software takes effect.

This means that if a device supports non disruptive code load (NDCL) updates along with non disruptive code activation (NDCA), the upgrade can occur without disruption or having to wait for a reboot.
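The NDCL vs. NDCA distinction can be sketched as a simple decision helper. This is purely an illustration of the concept, not any vendor's actual update logic; the function name and return strings are hypothetical:

```python
# Hypothetical illustration of NDCL vs. NDCA: both let you load new
# code while the device runs, but only NDCA also lets it take effect
# without a reboot or restart.
def apply_update(supports_ndcl, supports_ndca):
    """Return what an update requires, given device capabilities."""
    if not supports_ndcl:
        return "outage required to load and activate code"
    if not supports_ndca:
        return "code loaded online, reboot/restart needed to activate"
    return "code loaded and activated online, no disruption"

print(apply_update(True, False))  # NDCL only: the update waits on a restart
print(apply_update(True, True))   # NDCL + NDCA: fully non disruptive
```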

Which is better?

That depends, I want NDCA, however for many things I only need NDCL.

On the other hand, depending on what you need, perhaps it is both NDCL and NDCA, however also keep in mind needs vs. wants.

Ok, nuff said (for now).

Cheers gs
