Hello and welcome to this September 2015 Server StorageIO update newsletter. Summer has wrapped up here in the northern hemisphere, which means the fall conference season has started. In addition to large conferences, there are also many smaller events, including the sessions I will be doing in Nijkerk, Holland, the week of October 13-16, along with others (in-person and on-line) throughout the fall.
Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in-the-news commentary appearing soon.
Recent Server StorageIO articles appearing in different venues include:
NetworkComputing: Selecting Storage: It’s All About The Applications. Choosing the right storage for your applications depends on using the PACE model, evaluating Performance, Availability, Capacity and Economics. Often when I discuss mainstream applications with people, the perception is that bandwidth only applies to big data and analytics, video, and high-performance compute (HPC) or supercomputing applications such as those used in the seismic, geo, energy, video security surveillance, or entertainment industries. The reality is that those applications can be bandwidth or throughput intensive, but they can also generate large numbers of small I/Os, requiring many IOPS to handle metadata-related processing. Even bulk storage repositories for archiving, solutions using scale-out NAS, and object storage have a mix of IOPS and bandwidth needs. Read more here.
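To make the IOPS versus bandwidth distinction concrete, here is a small illustrative Python sketch; the workload profiles and I/O sizes below are assumptions for the example, not measurements from the article.

```python
# Illustrative only: relate IOPS, I/O size and bandwidth (bandwidth = IOPS x I/O size).
# The workload profiles below are assumed examples, not measured results.
workloads = {
    "metadata-heavy (small random I/O)": {"iops": 100_000, "io_size_kb": 8},
    "streaming video (large sequential I/O)": {"iops": 2_000, "io_size_kb": 1_024},
}

for name, w in workloads.items():
    bandwidth_mb_s = w["iops"] * w["io_size_kb"] / 1_024  # KB/s -> MB/s
    print(f"{name}: {w['iops']:>7} IOPS x {w['io_size_kb']} KB = {bandwidth_mb_s:,.0f} MB/s")
```

The point: a metadata-heavy workload can move less total data yet generate far more I/O operations, which is why PACE looks at both dimensions rather than bandwidth alone.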
EnterpriseStorageForum: NAND, DRAM, SAS/SCSI and SATA/AHCI: Not Dead, Yet
Manufacturers are coming out with new non-volatile memory (NVM) media like 3D XPoint. Does that mean that DRAM and other NVM media such as NAND flash are now dead?
Do new NVM storage access protocols such as NVM Express (NVMe) mean SCSI/SAS and AHCI/SATA are now dead?
My simple answer is no, they all have bright futures. Read more here.
Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here.
October 13 – Symposium: Software Defined Storage Management
October 14 – Server Storage I/O Fundamental Trends
October 15 – Symposium – Data Center Infrastructure Management (DCIM)
October 16 – “Converged Day” Server and Storage Decision making
September 23 – Webinar: Redmond Magazine & Dell Data Protection – The New World Order of Data Protection – Focus on Recovery. Learn more about the 9Rs of data protection and recovery.
The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.
Seven Databases in Seven Weeks (A Guide to Modern Databases and the NoSQL Movement) is a book written by Eric Redmond (@coderoshi) and Jim Wilson (@hexlib) that takes a look at several non-SQL-based database systems. Coverage includes PostgreSQL, Riak, Apache HBase, MongoDB, Apache CouchDB, Neo4J and Redis with plenty of code and architecture examples. Also covered are relational vs. key-value, columnar and document-based systems, among others. Read more here.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Fall 2015 Server Storage I/O Cloud Virtual Seminars Going Dutch
It’s that time of the year again when the fall 2015 events and activities are underway, which also includes a week of sessions in Holland October 13-16. I will be participating in four days of workshop seminars organized by Brouwer Storage Consultancy in Nijkerk covering server storage decision-making, converged and bulk storage options, software defined storage management, data center infrastructure management and data protection, along with industry trends and update sessions.
October 13th: Symposium – Software Defined Storage Management
09:00 -17:00
DOWNLOAD FLYER (Dutch)
REGISTER HERE
FREE Session! Access for end-users only, through invitation or contacting BSC.
Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk – www.deroodeschuur.nl
October 14th: Server Storage I/O Fundamental Trends V2.015 – What’s new, what’s the buzz, and what you need to know.
09:00 -17:00
DOWNLOAD Abstract/Agenda
REGISTER HERE
Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk – www.goldentulipamptvannijkerk.com/en
October 15th: Symposium – Data Center Infrastructure Management
09:00 -17:00
DOWNLOAD Abstract / Agenda
REGISTER Here
FREE Session! Access, through invitation or contacting BSC.
Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk – www.deroodeschuur.nl
October 16th: "Converged Day" Server and Storage Decision making – How do you want or need your storage packaged?
09:00 -17:00
DOWNLOAD Abstract / Agenda
REGISTER HERE
Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk – www.goldentulipamptvannijkerk.com/en
Learn more at the Brouwer Storage Consultancy site here, or get in touch with them to reserve your seat at these events.
Office: Olevoortseweg 43 3861 MH Nijkerk The Netherlands
T +31-33-246-6825 C +31-652-601-309 F +31-33-245-8956 E info@brouwerconsultancy.com
Watch for more events, seminars, live video, webinars and virtual trade shows by visiting the StorageIO events page.
What this all means and wrap up
Smart server and storage decisions for cloud, virtual, physical or legacy environments start with being informed, knowing your requirements and options, and having insight into the industry trends that are applicable to your environment. These sessions are vendor and technology neutral, held off-site at hotel venues in Nijkerk, Netherlands, so there is no need to worry about sales teams coming in to sell you something during the breaks or lunch, which are provided. There are also opportunities throughout the workshops for engagement, discussion and interaction with other attendees, including your peers from various commercial, government and service provider organizations, among others. Hope to see you in Nijkerk to discuss server storage I/O, cloud, virtual and other industry trends, technologies and techniques in October.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved
Hello and welcome to this August 2015 Server StorageIO update newsletter. Summer is wrapping up here in the northern hemisphere which means the fall conference season has started, holidays in progress as well as getting ready for back to school time. I have been spending my summer working on various things involving servers, storage, I/O networking hardware, software, services from cloud to containers, virtual and physical. This includes OpenStack, VMware vCloud Air, AWS, Microsoft Azure, GCS among others, as well as new versions of Microsoft Windows and Servers, Non Volatile Memory (NVM) including flash SSD, NVM Express (NVMe), databases, data protection, software defined, cache, micro-tiering and benchmarking using various tools among other things (some are still under wraps).
Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in-the-news commentary appearing soon.
Feature Topic – Non Volatile Memory including NAND flash SSD
Via Intel: history of memory infographic
This month’s feature topic theme is Non Volatile Memory (NVM), which includes technologies such as NAND flash commonly used in Solid State Devices (SSDs) storage today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage, I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.
NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
Spot The Newest & Best Server Trends (Via Processor)
Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.
Recent Server StorageIO articles appearing in different venues include:
IronMountain: Information Lifecycle Management: Which Data Types Have Value? It’s important to keep in mind that on a fundamental level, there are three types of data: information that has value, information that does not have value and information that has unknown value. Data value can be measured along performance, availability, capacity and economic attributes, which define how the data gets managed across different tiers of storage. Read more here.
EnterpriseStorageForum: Is Future Storage Converging Around Hyper-Converged? Depending on who you talk or listen to, hyper-converged storage is either the future of storage, or it is a hyped niche market that is not for everybody, particularly not larger environments. How converged is the hyper-converged market? There are many environments that can leverage converged infrastructure (CI) along with hyper-converged infrastructure (HCI), cluster-in-box (CiB) or other bundled solutions. Granted, not all of those environments will converge around the same CI, CiB and HCI or pod solution bundles, as everything is not the same in most IT environments and data centers. Not all markets, environments or solutions are the same. Read more here.
Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here.
Enmotus FuzeDrive provides micro-tiering, boosting performance (reads and writes) of storage attached to physical bare metal servers, virtual and cloud instances, including Windows and Linux operating systems, across various applications. In the simple example above, five separate SQL Server databases (260GB each) were placed on a single 6TB HDD. A TPC-C workload was run concurrently against all databases with various numbers of users. One workload used a single 6TB HDD (blue) while the other used a FuzeDrive (green) comprised of a 6TB HDD and a 400GB SSD, showing basic micro-tiering improvements.
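As a back-of-the-envelope way to see why placing hot data on an SSD tier helps, here is a simple Python model; the latencies and hit rates are assumptions for illustration only and are not the FuzeDrive or TPC-C results shown above.

```python
# Simple weighted-average service-time model for micro-tiering (illustration only).
# Assumed device latencies; real values depend on the devices and workload.
HDD_LATENCY_MS = 8.0   # assumed average random I/O latency for a 7.2K RPM HDD
SSD_LATENCY_MS = 0.15  # assumed average random I/O latency for a SATA/SAS SSD

def avg_latency_ms(ssd_hit_rate: float) -> float:
    """Average I/O latency when ssd_hit_rate of I/Os are served from the SSD tier."""
    return ssd_hit_rate * SSD_LATENCY_MS + (1.0 - ssd_hit_rate) * HDD_LATENCY_MS

for hit_rate in (0.0, 0.5, 0.8, 0.95):
    print(f"SSD hit rate {hit_rate:>4.0%}: avg latency {avg_latency_ms(hit_rate):.2f} ms")
```

Even a modest fraction of I/O landing on the SSD tier pulls the average service time down sharply, which is the basic idea behind micro-tiering a small SSD in front of a large HDD.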
The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.
While not a technology book, you do not have to be at or near retirement age to be planning for retirement. Some of you may already be at or near retirement age; for others, it’s time to start planning or refining your plans. A friend recommended this book and I’m recommending it to others. It’s pretty straightforward and you might be surprised how much money people may be leaving on the table! Check it out here at Amazon.com.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates
I attended the Flash Memory Summit in Santa Clara CA last week and not surprisingly there were many announcements about Non-Volatile Memory (NVM) along with related enabling technologies. Some of these announcements were component based, intended for original equipment manufacturers (OEMs) ranging from startups to established players, systems integrators (SIs) and value-added resellers (VARs), while others were more customer solution focused. From a customer solution focus, some of the technologies were consumer oriented while others were for business and some for cloud scale service providers.
Recent NVM, NVMe and Flash SSD news
A sampling of some recent NVM, NVMe and Flash related news includes among others:
New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
Everspin & Aupera show all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
Wow, Samsung’s New 16 Terabyte SSD Is the World’s Largest Hard Drive (Via Gizmodo)
Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)
NVMe primer
Via Intel: history of memory infographic
NVM includes technologies such as NAND flash commonly used in Solid State Devices (SSDs) storage today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage, I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.
Server Storage I/O memory (and storage) hierarchy
Keep in mind that memory is storage and storage is persistent memory, and that there are different classes, categories and tiers of memory and storage as shown above to meet various performance, availability, capacity and economic requirements. Besides NVM ranging from flash to NVRAM to emerging 3D XPoint among others, another popular topic that is gaining momentum is NVM Express (NVMe). NVMe (more material here at www.thenvmeplace.com) is a new server storage I/O access method and protocol for fast access to NVM based products. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDDs) along with SSDs among other things.
Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.
Leveraging the common PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5" drive format using a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, or as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially NVMe is being positioned as a back-end (to servers or storage systems) interface for accessing fast flash and other NVM based devices.
NVMe as a "back-end" I/O interface in a server or storage system accessing NVM storage/media devices
NVMe as a “front-end” interface for servers (or storage systems/appliances) to use NVMe based storage systems
NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used not only on the back-end but also as a front-end server to storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.
NVMe and shared PCIe
NVMe features
Main features of NVMe include among others:
Lower latency due to improved drivers and increased queues (and queue sizes)
Lower CPU overhead to handle larger numbers of I/Os (more CPU available for useful work)
Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
Bandwidth improvements leveraging fast PCIe interfaces and available lanes
Dual-pathing of devices, similar to what is available with dual-path SAS devices
Unlock the value of more cores per processor socket and software threads (productivity)
Various packaging options, deployment scenarios and configuration options
Appears as a standard storage device on most operating systems
Plug-and-play with in-box drivers on many popular operating systems and hypervisors, as the enumeration sketch below illustrates
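For example, on a Linux host with the in-box NVMe driver loaded, NVMe controllers show up under /sys/class/nvme. The following Python sketch is an assumption-based example (not tied to any vendor tool) that lists a few of the standard controller attributes.

```python
# Minimal sketch: enumerate NVMe controllers via Linux sysfs (/sys/class/nvme).
# Assumes a Linux host with the in-box NVMe driver and permission to read sysfs.
from pathlib import Path

def read_attr(ctrl: Path, name: str) -> str:
    attr = ctrl / name
    return attr.read_text().strip() if attr.exists() else "n/a"

base = Path("/sys/class/nvme")
controllers = sorted(base.glob("nvme*")) if base.exists() else []
if not controllers:
    print("No NVMe controllers found (or not a Linux host with NVMe devices).")
for ctrl in controllers:
    print(f"{ctrl.name}: model={read_attr(ctrl, 'model')} "
          f"serial={read_attr(ctrl, 'serial')} firmware={read_attr(ctrl, 'firmware_rev')}")
```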
MSP CMG, September 2014 Presentation (Flash back to reality – Myths and Realities Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
Spot The Newest & Best Server Trends (Via Processor)
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.
What this all means and wrap up
The question is not if NVM is in your future, it is! Instead the questions are what type of NVM including NAND flash among other mediums will be deployed where, using what type of packaging or solutions (drives, cards, systems, appliances, cloud) for what role (as storage, primary memory, persistent cache) along with how much among others. For some environments the solution is already, or will be All NVM Arrays (ANA) or All Flash Arrays (AFA) or All SSD Arrays (ASA) while for others the home run will be hybrid based solutions that work for you, fitting in and adapting to your environment as it changes.
Also keep in mind that a little bit of fast memory, including NVM based flash among others, in the right place can have a big benefit. My experience using flash enabled NVMe devices on Windows and Linux systems is that you can see lower response times at higher IOPS, along with lower CPU consumption, particularly when compared to 6Gbps SATA. Likewise bandwidth can easily be pushed to the limits of the NVMe device as well as the PCIe interface being used, such as x4 or x8, depending on implementation. That is also a warning and something to watch out for when comparing apples to oranges: while NVMe uses PCIe, understand whether the results you are looking at are for x4, x8 or faster PCIe, as the mere presence of PCIe does not mean you are running at full potential.
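To put the x4 versus x8 caution in numbers, here is a quick Python calculation of theoretical PCIe Gen3 line rates (8 GT/s per lane with 128b/130b encoding, before protocol overhead); actual delivered throughput will be lower.

```python
# Theoretical PCIe Gen3 bandwidth per lane count (before protocol overhead).
GT_PER_S = 8.0                # PCIe Gen3 transfer rate per lane
ENCODING = 128.0 / 130.0      # 128b/130b encoding efficiency
bytes_per_lane_gb_s = GT_PER_S * ENCODING / 8.0   # GB/s per lane

for lanes in (1, 4, 8, 16):
    print(f"PCIe Gen3 x{lanes}: ~{lanes * bytes_per_lane_gb_s:.2f} GB/s theoretical")
```

An NVMe device on an x4 connection therefore tops out around 3.9 GB/s in theory, roughly half of what an x8 configuration could show, which is why lane counts matter when comparing published results.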
Keep an eye on NVMe as a new high-speed, low-latency server storage I/O access protocol for unlocking the full performance capabilities of fast NVM based storage as well as leveraging the multiple cores in today’s fast processors. Does this mean AHCI/SATA or SCSI/SAS are now dead? Some will claim that, however at least near-term for the next few years (if not longer), those interfaces will continue to be used where they make sense, as well as where they can save dollars, specifically for cost sensitive, high-capacity environments that do not need the full performance of NVMe just yet.
As for the Flash Memory Summit event in Santa Clara, that was a good day with time well spent in briefings, meetings, demos and ad hoc discussions on the expo floor.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Some August 2015 Amazon Web Services (AWS) and Microsoft Azure Cloud Updates
Cloud Services Providers continue to extend their feature, function and capabilities and the following are two examples. Being a customer of both Amazon Web Services (AWS) as well as Microsoft Azure (among others), I receive monthly news updates about service improvements along with new features. Here are a couple of examples involving recent updates from AWS and Azure.
Azure enhancements
Azure Premium Storage generally available in Japan East
Solid State Device (SSD) based Azure Premium Storage is now available in the Japan East region. Add up to 32 TB and more than 64,000 IOPS (read operations) per virtual machine with Azure Premium Storage. Learn more about Azure storage and pricing here.
Azure Data Factory generally available
Data Factory is a cloud based data integration service for automated management as well as movement and transformation of data. Learn more and view pricing options here.
AWS Partner Updates
A recent Amazon Web Services (AWS) customer update included the following pertaining to partner storage solutions.
Learn more about the AWS Partner Network (APN) here.
Primary Cloud File and NAS storage complementing on-premises (e.g. your local) storage
Avere
Ctera
NetApp (Cloud OnTap)
Panzura
SoftNAS
Zadara
Secure File Transfer
Aspera
Signiant
Note that the above are those listed on the AWS Storage Partner Page as of this being published and are subject to change. Likewise, other solutions that are not part of the AWS partner program may not be listed.
How do primary storage clouds and cloud for backup differ?
What’s most important to know about my cloud privacy policy?
What this all means and wrap up
Cloud Service Providers (CSP) continue to enhance their capabilities, as well as their footprints as part of growth. In addition to technology, tools and the number of regions, sites and data centers, the CSPs are also expanding their partner networks, both in how many partners they have and in the scope of those partnerships. Some of these partnerships treat the cloud as a destination, others are for enabling hybrid scenarios where public clouds become an extension complementing traditional IT. Everything is not the same in most environments and one type of cloud approach does not have to suit or fit all needs, hence the value of hybrid cloud deployment and usage.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Supermicro CSE-M14TQC Use your media bay to add 12 Gbps SAS SSD drives to your server
Do you have a computer server, workstation or mini-tower PC that needs to have more 2.5" form factor hard disk drive (HDD), solid state device (SSD) or hybrid flash drives added, yet has no expansion space?
Do you also want or need the HDD or SSD drive expansion slots to be hot swappable, 6 Gbps SATA3 along with up to 12 Gbps SAS devices?
Do you have an available 5.25" media bay slot (e.g. where you can add an optional CD or DVD drive), or can you remove your existing CD or DVD drive and use USB for software loading?
Do you need to carry out the above without swapping out your existing server or workstation on a reasonable budget, say around $100 USD plus tax, handling, shipping (your prices may vary)?
If you need to implement the above, then here is a possible solution, or in my case, a real solution.
Supermicro CSE-M14TQC with hot swap canister before installing in one of my servers
In the past I have used a solution from Startech that supports up to 4 x 2.5" 6 Gbps SAS and SATA drives in a 5.25" media bay form factor installing these in my various HP, Dell and Lenovo servers to increase internal storage bays (slots).
Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure
I still use the StarTech device shown (read earlier reviews and experiences here, here and here) above in some of my servers which continue to be great for 6Gbps SAS and SATA 2.5" HDDs and SSDs. However for 12 Gbps SAS devices, I have used other approaches including external 12 Gbps SAS enclosures.
Recently while talking with the folks over at Servers Direct, I mentioned how I was using the StarTech 4 x 2.5" 6Gbps SAS/SATA media bay enclosure as a means of boosting the number of internal drives that could be put into some smaller servers. The Servers Direct folks told me about the Supermicro CSE-M14TQC which, after doing some research, I decided to buy to complement the StarTech 6Gbps enclosures, as well as external 12 Gbps SAS enclosures and other internal options.
What is the Supermicro CSE-M14TQC?
The CSE-M14TQC is a 5.25" form factor enclosure that enables four (4) 2.5" hot swappable (if your adapter and OS support hot swap) 12 Gbps SAS or 6 Gbps SATA devices (HDD and SSD) to fit into the media bay slot normally used by CD/DVD devices in servers or workstations. There is a single Molex male power connector on the rear of the enclosure that can be used to attach to your server's available power using applicable connector adapters. In addition there are four separate drive connectors (e.g. SATA type connectors) that support up to 12 Gbps SAS per drive, which you can attach to your server's motherboard (note SAS devices need a SAS controller), HBA or RAID adapter internal ports.
Cooling is provided via a rear mounted 12,500 RPM, 16 cubic feet per minute fan. Each of the four drives is hot swappable (requires operating system or hypervisor support) and contained in a small canister (provided with the enclosure). Drives easily mount to the canister via screws that are also supplied as part of the enclosure kit. There is also a drive activity and failure notification LED for the devices. If you do not have any available SAS or SATA ports on your server's motherboard, you can use an available PCIe slot and add an HBA or RAID card for attaching the CSE-M14TQC drives. For example, a 12 Gbps SAS (6 Gbps SATA) Avago/LSI RAID card, or a 6 Gbps SAS/SATA RAID card.
Via Supermicro CSE-M14TQC rear details (4 x SATA and 1 Molex power connector)
CSE-M14TQC rear view before installation
CSE-M14TQC ready for installation with 4 x SATA (12 Gbps SAS) drive connectors and Molex power connector
Tip: In the case of the Lenovo TS140 that I initially installed the CSE-M14TQC into, there is not a lot of space for installing the drive connectors or Molex power connector to the enclosure. Instead, attach the cables to the CSE-M14TQC as shown above before installing the enclosure into the media bay slot. Simply attach the connectors as shown and feed them through the media bay opening as you install the CSE-M14TQC enclosure. Then attach the drive connectors to your HBA, RAID card or server motherboard and the power connector to your power source inside the server.
Note and disclaimer: pay attention to your server manufacturer's power loading and specifications, along with how much power will be used by the HDDs or SSDs to be installed, to avoid electrical power or fire issues due to overloading!
CSE-M14TQC installed into Lenovo TS140 empty media bay
CSE-M14TQC with front face plate installed on Lenovo TS140
If you have a server that simply needs some extra storage capacity by adding some 2.5" HDDs, or a performance boost from fast SSDs, yet you do not have any more internal drive slots or expansion bays, leverage your media bay. This applies to smaller environments where you might have one or two servers, as well as to environments where you want or need to create a scale-out software defined storage or hyper-converged platform using your own hardware. Another option is that if you have a lab or test environment for VMware vSphere ESXi, Windows, Linux, OpenStack or other things, this can be a cost-effective approach to adding both storage space capacity as well as performance while leveraging newer 12Gbps SAS technologies.
For example, create a VMware VSAN cluster using smaller servers such as the Lenovo TS140 or equivalent, where you can install a couple of 6TB or 8TB higher capacity 3.5" drives in the internal drive bays, then add a couple of 12 Gbps SAS SSDs and a couple of 2.5" 2TB (or larger) HDDs along with a RAID card and a high-speed networking card. If VMware VSAN is not your thing, how about setting up a Windows Server 2012 R2 failover cluster including Scale Out File Server (SOFS) with Hyper-V, or perhaps OpenStack or one of many other virtual storage appliances (VSA) or software defined storage, networking or other solutions. Perhaps you need to deploy more storage for a big data Hadoop based analytics system, or a cloud or object storage solution? On the other hand, if you simply need to add some storage to your storage, media, gaming or general purpose server, the CSE-M14TQC can be an option along with other external solutions.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350
Do you have a Lenovo TD350, or for that matter many other servers, that when trying to load or run VMware vSphere ESXi 5.5 U2 (or other versions) runs into a boot loop at the “Initializing ACPI” point?
VMware ACPI boot loop
The symptoms are that you see ESXi start its boot process, loading drivers and modules (e.g. black screen), then you see the yellow boot screen with Timer and Scheduler initialized, and at the “Initializing ACPI” point, ka boom, a boot loop starts (e.g. the above process repeats after the system boots).
The fix is actually pretty quick and simple; finding it took a bit of time, trial and error.
There were of course the usual suspects, such as:
Checking the BIOS and firmware version of the motherboard on the Lenovo TD350 (checked this, however did not upgrade)
Making sure that the proper VMware ESXi patches and updates were installed (they were, this was a pre-built image from another working server)
Having the latest installation media if this was a new install (tried this as part of troubleshooting to make sure the pre-built image was ok)
Removing any conflicting devices (small diversion hint: make sure if you have cloned a working VMware image to an internal drive that it is removed to avoid same file system UUID errors)
Booting into the BIOS, making sure that VT is enabled for the processor, that AHCI is enabled for any SATA drives as opposed to IDE or RAID, and that boot is set to Legacy vs. Auto (e.g. disable UEFI support), as well as verifying boot order. Having been in auto mode for UEFI support for some other activity, this was easy to change, however it was not the magic silver bullet I was looking for.
Breaking the VMware ACPI boot loop on Lenovo TD350
After doing some searching and coming up with some interesting and false leads, as well as trying several boots, BIOS configuration changes, and even cloning the good VMware ESXi boot image to an internal drive in case there was a USB boot issue, the solution was rather simple once found (or remembered).
Lenovo TD350 BIOS basic settings
Lenovo TD350 processor settings
Make sure that in your BIOS setup under PCIE you disable “Above 4GB decoding”.
Turns out that I had enabled "Above 4GB decoding" for some other things I had done.
Lenovo TD350 disabling above 4GB decoding on PCIE under advanced settings
Once I made the above change and pressed F10 to save the BIOS settings and boot, VMware ESXi had no issues getting past ACPI initialization and the boot loop was broken.
Lenovo ThinkServer TD340 Server and StorageIO lab Review
Part II: Lenovo TS140 Server and Storage I/O lab Review
Software defined storage on a budget with Lenovo TS140
What this all means and wrap up
In this day and age of software defined focus, remember to double-check how your hardware BIOS (e.g. software) is defined for supporting various software defined server, storage, I/O and networking software for cloud, virtual, container and legacy environments. Watch for future posts with my experiences using the Lenovo TD350 including with Windows 2012 R2 (bare metal and virtual), Ubuntu (bare metal and virtual) with various application workloads among other things.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
Intel Micron 3D XPoint server storage NVM SCM PM SSD.
This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.
Is this 3D XPoint marketing, manufacturing or material technology?
You can’t have a successful manufactured material technology without some marketing; likewise marketing without some manufactured material would be manufactured marketing. In the case of 3D XPoint and its announcement launch, there was real technology shown, granted it was only a wafer and dies as opposed to an actual DDR4 DIMM or PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, on a relative comparison basis, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made-for-TV or web circus event, which can be a good thing.
Wafer unveiled containing 3D XPoint 128 Gb dies
Who will get access to 3D XPoint?
Initially, 3D XPoint production capacity will go toward the two companies offering early samples to their customers later this year, with general production slated for 2016, meaning the first real customer-deployed products should appear sometime in 2016.
Is it NAND or NOT?
3D XPoint is not NAND flash, it is also not NVRAM or DRAM, it’s a new class of NVM that can be used for server class main memory with persistency, or as persistent data storage among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable with a longer useful life for writing and storing data vs. NAND flash.
Why is 3D XPoint important?
As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947, granted there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class joining its predecessors many of which continue to be used today in various roles.
Major memory classes or categories timeline
In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).
Initially the 3D XPoint technology is available as a two-layer, 128 Gbit per die capacity. Keep in mind that there are usually 8 bits to a byte, resulting in 16 GByte capacity per chip initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve, and most implementations will have multiple chips in some type of configuration.
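The capacity arithmetic is straightforward; the short Python calculation below shows how per-die bits translate into packaged bytes (die counts beyond the single announced 128 Gbit die are illustrative packaging assumptions, not announced products).

```python
# Convert the announced 128 Gbit 3D XPoint die into packaged capacities.
DIE_GBIT = 128                  # announced initial die density in gigabits
die_gbyte = DIE_GBIT / 8        # 8 bits per byte -> 16 GB per die

# Illustrative packages: stacking multiple dies per product is an assumption here.
for dies in (1, 2, 4, 8):
    print(f"{dies} die(s) x {die_gbyte:.0f} GB = {dies * die_gbyte:.0f} GB raw")
```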
What will 3D XPoint cost?
During the 3D XPoint launch webinar, Intel and Micron hinted that initial pricing will be between current DRAM and NAND flash on a per cell or bit basis; however, real pricing and costs will vary depending on how the technology is packaged for use. For example, whether it is placed on a DDR4 or different type of DIMM, on a PCIe Add In Card (AIC), or in a drive form factor SSD, among other options, will vary the real price. Likewise, as with other memories and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to further improve.
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) ( Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron ( Media Room, links, videos, images and more including B roll videos)
DRAM, which has been around for some time, has plenty of life left for many applications, as does NAND flash including new 3D NAND, vNAND and other variations. For the next several years, there will be a co-existence between new and old NVM and DRAM among other memory technologies including 3D XPoint. Read more in this series including Part I here and Part III here.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
Part III – 3D XPoint server storage class memory SCM
Updated 1/31/2018
3D XPoint nvm pm scm storage class memory.
This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.
What is 3D XPoint and how does it work?
3D XPoint is a new class or category of memory (view other categories of memory here) that provides performance for reads and writes closer to that of DRAM with about 10x the capacity density. In addition to speed closer to DRAM vs. the slower NAND flash, 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher density, fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs) including PCIe Add In Cards (AIC), M.2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices that also have better endurance or life span compared to NAND flash.
3D XPoint architecture and attributes
The initial die or basic chip building block 3D XPoint implementation is a two-layer 128 Gbit device which, at 8 bits per byte, would yield 16GB raw. Over time increased densities should become available as the bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die could hold up to 16 GBytes of data, multiple dies could be packaged together to create a 32GB, 64GB, 128GB etc. or larger actual product. Think about not only where packaged flash based SSD capacities are today, also think in terms of where DDR3 and DDR4 DIMMs are at, such as 4GB, 8GB, 16GB and 32GB densities.
The 3D aspect comes from the memory being organized in a matrix, initially two layers high, with multiple rows and columns that intersect; where those intersections occur is a microscopic material based switch for accessing a particular memory cell. Unlike NAND flash, where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed to speed up reads and writes in a more granular fashion. It is this more granular access, along with performance, that will enable 3D XPoint to be used in lower latency scenarios where DRAM would normally be used.
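One way to see why cell-level access matters: NAND flash programs data a page at a time, so a small update drags a whole page along with it, while a bit-addressable medium touches only what changed. The sketch below is illustrative; the 16 KB page size is an assumed typical value, not a 3D XPoint or NAND product specification.

```python
# Illustrative write-granularity comparison (assumed sizes, not product specs).
UPDATE_BYTES = 64            # a small metadata or counter update
NAND_PAGE_BYTES = 16 * 1024  # assumed typical NAND flash page size

nand_written = NAND_PAGE_BYTES   # whole page must be programmed for the update
granular_written = UPDATE_BYTES  # only the changed cells/bits are written
print(f"NAND page write:     {nand_written} bytes for a {UPDATE_BYTES}-byte update "
      f"(~{nand_written // UPDATE_BYTES}x amplification)")
print(f"Cell-granular write: {granular_written} bytes for the same update")
```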
Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron based, it is material based. While Intel and Micron did not specify the actual chemistry and physical materials used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out how Dailytech makes an interesting educated speculation or thesis on the underlying technology.
Watch the following video to get a better idea and visually see how 3D XPoint works.
Left many dies on a wafer, right, a closer look at the dies cut from the wafer
Dies (here and here) are the basic building block of what goes into the chips that in turn are the components used for creating DDR DIMMs for main computer memory, as well as for creating SD and MicroSD cards, USB thumb drives, PCIe AIC and drive form factor SSDs, as well as custom modules on motherboards, or consumed at the bare die and wafer level (e.g. where you are doing really custom things at volume, beyond soldering-iron scale).
Has Intel and Micron cornered the NVM and memory market?
We have heard proclamations, speculation and statements of the demise of DRAM, NAND flash and other volatile and NVM memories for years, if not decades now. Each year there is the usual “this will be the year of x,” where “x” can include, among others: Resistive RAM (aka ReRAM or RRAM, aka the memristor that HP earlier announced they were going to bring to market and then earlier this year canceled those plans, while Crossbar continues to pursue RRAM); MRAM or Magnetoresistive RAM; Phase Change Memory (aka CRAM or PCM and PRAM); and FRAM (aka FeRAM or Ferroelectric RAM).
Expanding persistent memory and SSD storage markets
Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can then be produced at production volumes with economic yields. As a reference point, there is still plenty of life in both DRAM as well as NAND flash, the latter having appeared around 1989.
Technology industry adoption precedes customer adoption and deployment
There is a difference between industry adoption and deployment vs. customer adoption and deployment, they are related, yet separated by time as shown in the above figure. What this means is that there can be several years from the time a new technology is initially introduced and when it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads the past few years since it was introduced in 1989.
This raises the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at Dailytech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.
What’s the difference between 3D NAND flash and 3D XPoint?
3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. it’s not NAND).
3D NAND is a variation of traditional flash with the difference being vertical stacking vs. horizontal to improve density, also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment, or scaling up, vs. scaling out by using more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier and, more recently, magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage ( Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron ( Media Room, links, videos, images and more including B roll videos)
First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride with plenty of market upside left. The same goes for DRAM, which has been around for some time; it too still has plenty of life left for many applications. However, other applications that need improved speed over NAND flash, or persistency and density vs. DRAM, will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus at least for the next several years, there will be a co-existence between new and old NVM and DRAM among other memory technologies. Bottom line, 3D XPoint is a new class of NVM memory that can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
3D XPoint NVM persistent memory PM storage class memory SCM
Updated 1/31/2018
This is the first of a three-part series on the Intel and Micron 3D XPoint Non Volatile Memory (NVM) for servers and storage announcement. Read Part II here and Part III here.
In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used for primary main memory (e.g. what’s in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and for persistent storage faster than today’s NAND flash-based solid state devices (SSDs), not to mention future hybrid usage scenarios. Note that this announcement, while having the common term 3D in it, is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).
Data needs to be close to processing, processing needs to be close to the data (locality of reference)
Server Storage I/O memory hardware and software hierarchy along with technology tiers
What did Intel and Micron announce?
Intel SVP and General Manager Non-Volatile Memory solutions group Robert Crooke (Left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology jointly developed and manufactured by Intel and Micron which is a new form or category of NVM that can be used for both primary memory in servers, laptops, other computers among other uses, as well as for persistent data storage.
Robert Crooke (Left) and Mark Durcan (Right)
Summary of 3D XPoint announcement
New category of NVM memory for servers and storage
Joint development and manufacturing by Intel and Micron in Utah
Non volatile so can be used for storage or persistent server main memory
Allows NVM to scale with data, storage and processor performance
Leverages capabilities of both Intel and Micron who have collaborated in the past
Performance: Intel and Micron claim up to 1000x faster vs. NAND flash
Availability: persistent NVM compared to DRAM, with better durability (life span) vs. NAND flash
Capacity: densities about 10x better vs. traditional DRAM
Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products); see the arithmetic sketch below
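Translating those relative claims into rough absolute numbers, as an illustration only: the NAND and DRAM baselines in the Python sketch below are assumed typical values of the era, not figures from the announcement.

```python
# Rough illustration of the announced relative claims using assumed baselines.
NAND_READ_LATENCY_US = 85.0    # assumed typical NAND flash read latency (not announced)
DRAM_DIMM_CAPACITY_GB = 32     # assumed typical DDR4 DIMM capacity (not announced)

xpoint_latency_us = NAND_READ_LATENCY_US / 1000.0   # "up to 1000x faster vs. NAND"
xpoint_density_gb = DRAM_DIMM_CAPACITY_GB * 10      # "about 10x denser vs. DRAM"

print(f"Implied 3D XPoint latency: ~{xpoint_latency_us:.3f} us (~{xpoint_latency_us * 1000:.0f} ns)")
print(f"Implied capacity at DRAM-like packaging: ~{xpoint_density_gb} GB per module")
```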
What applications and products is 3D XPoint suited for?
In general, 3D XPoint should be able to be used for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT and cloud or managed service provider data centers based applications and services, as well as consumer focused among many others.
3D XPoint enabling various applications
In general, applications or usage scenarios along with supporting products that can benefit from 3D XPoint include, among others: applications that need larger amounts of main memory in a denser footprint such as in-memory databases, little and big data analytics, gaming, wave form analysis for security, copyright or other detection analysis, life sciences, high performance compute and high-productivity compute, energy, video and content serving among many others.
In addition, applications that need persistent main memory for resiliency, or to cut delays and impacts for planned or un-planned maintenance or having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot). 3D XPoint will also be useful for those applications that need faster read and write performance compared to current generations NAND flash for data storage. This means both existing and emerging applications as well as some that do not yet exist will benefit from 3D XPoint over time, like how today’s applications and others have benefited from DRAM used in Dual Inline Memory Module (DIMM) and NAND flash advances over the past several decades.
Where to read, watch and learn more
Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) ( Part I)
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron
Intel and Micron (Media Room, links, videos, images and more including B roll videos)
First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint along with more analysis and commentary.
Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
Hello and welcome to this July 2015 Server StorageIO update newsletter. It’s mid-summer here in the northern hemisphere, which for many means vacations or holidays.
Content Solution Platforms
Thus this month’s newsletter has a focus on content solution platforms, including hardware and software that get defined to support various applications. Content solutions span from video (4K, HD and legacy streaming, pre-/post-production and editing), audio, imaging (photo, seismic, energy, healthcare, etc.) to security surveillance (including Intelligent Video Surveillance [ISV] as well as Intelligence Surveillance and Reconnaissance [ISR]).
An industry and customer trend is leveraging converged platforms based on multi-socket processors with dozens of cores and threads (logical processors) to support parallel or high-concurrent threaded content based applications.
Recently I had the opportunity, courtesy of Servers Direct, to get some hands-on test time with one of their 2U Content Solution platforms. In addition to big fast data, other content solution applications include: content distribution network (CDN) content caching, network function virtualization (NFV), software-defined networking (SDN), cloud rich unstructured big fast media data, analytics and little data (e.g. SQL and NoSQL databases, key-value stores, repositories and meta-data) among others.
View other Server StorageIO lab review reports here
Closing Comments
Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in-the-news commentary appearing soon.
Cheers gs
Greg Schulz – @StorageIO
Microsoft MVP File System Storage VMware vExpert
In This Issue
Industry Trends News
Commentary in the news
Tips and Articles
StorageIOblog posts
Server StorageIO Lab reviews
Events and Webinars
Resources and Links
StorageIO Commentary in the news
Recent Server StorageIO commentary and industry trends perspectives about news, activities and announcements.
Processor: A Look At Object-Based Storage
Processor: Newest and best server trends
PowerMore: Flash not just for performance
SearchVirtualStorage: Containers and storage
BizTechMagazine: Simplify with virtualization
EnterpriseStorageForum: Future DR Storage
EnterpriseStorageForum: 10 Tips for DRaaS
EnterpriseStorageForum: NVMe planning
A common question I am asked is, “What is the best storage technology?” My routine answer is, “It depends!” During my recent Interop Las Vegas session “Smart Shopping for Your Storage Strategy” I addressed this very question. Read more in my tip Selecting Storage: Start With Requirements over at Network Computing.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(TM) and UnlimitedIO All Rights Reserved
Hello and welcome to this joint May and June 2015 Server StorageIO update newsletter. Here in the northern hemisphere it’s summer, which means holiday vacations among other things.
There has been a lot going on this spring and so far this summer with more in the wings. Summer can also be a time to get caught up on some things, preparing for others while hopefully being able to enjoy some time off as well.
What have I been working on (or with)? Clouds (OpenStack, vCloud Air, AWS, Azure, GCS among others), virtual and containers, flash SSD devices (drives, cards), software defining, content servers, NVMe, databases, data protection items, servers, cache and micro-tiering among other things.
Speaking of getting caught up, back in early May among many other conferences (Cisco, Docker, HP, IBM, OpenStack, Red Hat and many other events) was EMCworld. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that’s a disclosure btw ;). View a summary StorageIOblog post covering EMCworld 2015 here, along with recent EMC announcements including the acquisition of cloud services vendor Virtustream for $1.2B and ECS 2.0.
Server and Storage I/O Wrappings
This month’s newsletter has a focus on software and storage wrappings, that is, how your storage or software is packaged, delivered or deployed. For example: traditional physical storage systems, software defined storage as shrink-wrap or download, tin-wrapped software as an appliance, virtual-wrapped such as a virtual storage appliance, or cloud-wrapped among others.
OpenStack software defined cloud
OpenStack (the organization, community, event and software) continues to gain momentum. The latest release, known as Kilo (more Kilo info here), was released in early April, followed by the OpenStack summit in May.
Some of you might be more involved with OpenStack vs. others, perhaps having already deployed into your production environment. Perhaps you, like myself have OpenStack running in a lab for proof of concept, research, development or learning among other things.
You might even be using the services of a public cloud or managed service provider that is powered by OpenStack. On the other hand, you might be familiar with OpenStack from reading up on it, watching videos, listening to podcasts or attending events to figure out what it is, where it fits, as well as what your organization can use it for.
Drew Robb (@Robbdrew) has a good overview piece about OpenStack and storage over at Enterprise Storage Forum (here). OpenStack is a collection of tools or bundles for building private, hybrid and public clouds. These various open source projects within the OpenStack umbrella include compute (Nova) and virtual machine images (Glance). Other components include dashboard management (Horizon), security and identity control (Keystone), network (Neutron), object storage (Swift), block storage (Cinder) and file-based storage (Manila) among others.
It’s up to the user to decide which pieces you will add. For example, you can use Swift without having virtual machines and vice versa. Read Drew’s complete article here.
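As a taste of how those pieces are consumed programmatically, here is a minimal hedged sketch using the openstacksdk Python library; it assumes the package is installed and that a clouds.yaml entry named "mycloud" exists (both assumptions for the example, not part of Drew's article).

```python
# Minimal sketch using openstacksdk; assumes `pip install openstacksdk`
# and a clouds.yaml entry named "mycloud" (hypothetical name).
import openstack

conn = openstack.connect(cloud="mycloud")

# Nova: list compute instances
for server in conn.compute.servers():
    print("server:", server.name)

# Cinder: create a small block storage volume (size and name are illustrative)
volume = conn.block_storage.create_volume(name="demo-volume", size=1)
print("created volume:", volume.id)

# Swift: create an object storage container (name is illustrative)
container = conn.object_store.create_container(name="demo-container")
print("created container:", container.name)
```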
This is part of an ongoing series of posts that are part of www.storageioblog.com/data-protection-diaries-main/ on data protection, including archiving, backup/restore, business continuance (BC), business resiliency (BR), data footprint reduction (DFR), disaster recovery (DR), and High Availability (HA), along with related themes, tools, technologies, techniques, trends and strategies.
Data protection is a broad topic that spans from logical and physical security to HA, BC, BR, DR and archiving (including life beyond compliance), along with various tools, technologies and techniques. Key is aligning those to the needs of the business or organization for today’s as well as tomorrow’s requirements. Instead of doing things the way they have been done in the past, which may have been based on what was known or possible given the technology capabilities of the time, why not start using new and old things in new ways?
Let’s start using all the tools in the data protection toolbox regardless of if they are new or old, cloud, virtual, physical, software defined product or service in new ways while keeping the requirements of the business in focus. Read more from this post here.
Recent Server StorageIO commentary and industry trends perspectives about news, activities and announcements.
BizTechMagazine: Comments on how to simplify your data center with virtualization
EnterpriseStorageForum: Comments on OpenStack and Clouds
EnterpriseStorageForum: Comments on Top Ten Software Defined Storage Tips, Gotchas and Cautions
EdTech: Comments on Harness Power with New Processors
Processor: Comments on Protecting Your Servers & Networking Equipment
Processor: Comments on Improve Remote Server Management including KVM
CyberTrend: Comments on Software Defined Data Center and Virtualization
BizTechMagazine: Businesses Prepare as End-of-Life for Windows Server 2003 Nears
InformationWeek: Top 10 sessions from Interop Las Vegas 2015
This is a new section starting in this issue, listing various new or existing vendors and service providers that you may not have heard about.
CloudHQ – Cloud management tools
EMCcode Rex-Ray – Container management
Enmotus FUZE – Flash leveraged micro tiering
Rubrik – Data protection management
Sureline – Data protection management
Virtunet Systems – VMware flash cache software
InfiniteIO – Cloud and NAS cache appliance
Servers Direct – Server and storage platforms
Check out more vendors you may know, have heard of, or that are perhaps new to you on the Server StorageIO Industry Links page here. There are over 1,000 vendor entries (and growing) on the links page.
StorageIO Tips and Articles
So you have a new storage device or system. How will you test or find its performance? Check out this quick-read tip on storage benchmark and testing fundamentals over at BizTech.
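As a companion to that tip, here is a toy Python sketch of the kind of measurement a benchmark reports, in this case sequential write throughput to a scratch file. It is only an illustration; a purpose-built benchmark tool controls caching, queue depth, I/O sizes and access patterns, and the file path and sizes below are arbitrary assumptions.

```python
# Toy illustration of the kind of metric (throughput) a storage benchmark
# reports. A real test tool controls caching, queue depth, I/O size and
# access patterns; this sketch does not. Path and sizes are arbitrary.
import os
import time

PATH = "scratch.bin"        # test file (placeholder location)
BLOCK = 1024 * 1024         # 1 MiB per write
COUNT = 256                 # 256 MiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())    # push data to the device, not just the page cache
elapsed = time.time() - start

print("Wrote %d MiB in %.2f sec = %.1f MB/sec" % (COUNT, elapsed, (COUNT * BLOCK) / elapsed / 1e6))
os.remove(PATH)
```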
BrightTalk Webinar – June 23 2015 9AM PT Server Storage I/O Innovation v2.015: Protect Preserve & Serve Your Information
From StorageIO Labs
Research, Reviews and Reports
VMware vCloud Air Test Drive
local and distributed NAS (NFS, CIFS, DFS) file data. Read more here.
VMware vCloud Air provides a platform similar to those just mentioned among others for your applications and their underlying resource needs (compute, memory, storage, networking) to be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises based VMware solutions you might be familiar with.
Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.
EMCworld 2015 How Do You Want Your Storage Wrapped?
Back in early May I was invited by EMC to attend EMCworld 2015, which included both the public sessions as well as several NDA-based discussions. Keep in mind that there is the known, there is the unknown (or assumed or speculated), and in between there are NDAs, nuff said on that. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that’s a disclosure btw ;), and here is a synopsis of various EMCworld 2015 announcements.
What EMC announced
VMAX3 enhancements to the EMC enterprise flagship storage platform to keep it relevant for traditional legacy workloads as well as in converged, scale-out, cloud, virtual and software defined environments.
VNX 3200 entry-level All Flash Array (AFA) flash SSD system starting at $25,000 USD for a 3TB unified platform with full data services found in other VNX products.
vVNX aka Virtual VNX aka "project liberty", which is a community (e.g. free) software version of the VNX. vVNX is a Virtual Storage Appliance (VSA) that you download and run on a VMware platform. Learn more and download here. Note that the installer does a CPU type check, so forget about trying to run it on an Intel NUC or similar; I tried just because I could, and the install will protect you from doing such things.
Various data protection related items including new Data Domain platforms as well as software updates and integration with other EMC platforms (storage systems).
All Flash Array (AFA) XtremIO 4.0 enhancements including larger clusters and larger nodes to boost performance, capacity and availability, along with copy service updates among other improvements.
Preview of DSSD, a shared (inside a rack) external flash Solid State Device (SSD) solution. While much of DSSD is still under NDA, EMC did provide more public details at EMCworld. Between what was displayed and announced publicly at EMCworld, as well as what can be found via Google (or other searches), you can piece together more of the DSSD story. What is known publicly today is that DSSD leverages the new Non-Volatile Memory Express (NVMe) access protocol built upon underlying PCIe technology. More on DSSD in future discussions; if you have not done so, get an NDA deep dive briefing on it from EMC.
ScaleIO is now available via a free download here, including both Windows and Linux clients along with instructions for those operating systems as well as VMware.
ViPR can also be downloaded for free from here (it has been available previously), and it has also been placed into open source by EMC.
What EMC announced since EMCworld 2015
Acquisition of cloud services (and software tools) vendor Virtustream for $1.2B adding to the federation cloud services portfolio (companion to VMware vCloud Air).
Release of ECS 2.0, including a free download here. This new version of ECS (Elastic Cloud Storage) can be used independent of the ViPR controller, or in conjunction with ViPR. In addition, ECS now has about 80% of the functionality of the Centera object storage platform. The remaining 20% (mainly regulatory compliance governance) will be added to ECS in the future, providing a migration path for Centera customers. In case you are wondering what EMC does with Centera, Atmos, ViPR and now ECS, the answer is that ECS can work with or without ViPR, and the functionality of Centera and Atmos is being rolled into ECS. ECS, as a refresher, is software that transforms general purpose industry standard servers with direct attached storage into a scale-out HDFS and object storage solution.
Check out EMCcode including S3motion, which I use and have reviewed here. Also check out EMCcode Rex-Ray; if you are into Docker containers it should be of interest, and I know I’m interested in it.
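Since ECS presents S3-compatible (among other) object interfaces, a generic S3 client can be pointed at it. The sketch below uses the boto3 library against a placeholder endpoint with made-up credentials and bucket name, purely to illustrate the idea rather than document a specific ECS configuration.

```python
# Sketch: talking to an S3-compatible object store (such as an ECS
# endpoint) with boto3. The endpoint URL, access keys and bucket name
# are placeholders, not a real deployment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://ecs.example.com:9020",  # S3-compatible endpoint (placeholder)
    aws_access_key_id="object_user",             # placeholder credentials
    aws_secret_access_key="secret_key",
)

s3.create_bucket(Bucket="storageio-demo")
s3.put_object(Bucket="storageio-demo", Key="hello.txt", Body=b"hello object storage")

obj = s3.get_object(Bucket="storageio-demo", Key="hello.txt")
print(obj["Body"].read())
```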
What this all means and wrap-up
There was no single major explosive announcement; however, the sum of all the announcements together should not be overshadowed by the big tent, made-for-TV (or web) productions and entertainment. What EMC announced was effectively: how would you like, and how do you want and need, your storage and associated data services along with management wrapped?
Speaking of wrappings, do you want your software defined storage management and storage wrapped in a legacy turnkey solution such as VMAX3, VNX or Isilon? Do you want or need it to be hybrid or all flash, converged and unified, block, file or object?
Or do you need or want the software defined storage management and storage to be "shrink wrapped" as a download so you can deploy on your own hardware "tin wrapped" or as a VSA "virtual wrapped" or cloud wrapped? Do you need or want the software defined storage management and storage to leverage anybody’s hardware while being open source?
How do you need or want your storage to be wrapped to fit your specific needs? That, IMHO, was the essence of what EMC announced at EMCworld 2015, granted the motorcycles and other production entertainment were engaging as well as educational.
VMware vCloud Air Server StorageIOlab Test Drive with videos
Recently I was invited by VMware vCloud Air to do a free hands-on test drive of their actual production environment. Some of you may already be using VMware vSphere, vRealize and other software defined data center (SDDC) aka Virtual Server Infrastructure (VSI) or Virtual Desktop Infrastructure (VDI) tools among others. Likewise, some of you may already be using one of the many cloud compute or Infrastructure as a Service (IaaS) offerings such as Amazon Web Services (AWS) Elastic Compute Cloud (EC2), Centurylink, Google Cloud, IBM Softlayer, Microsoft Azure, Rackspace or Virtustream (being bought by EMC) among many others.
VMware vCloud Air provides a platform similar to those just mentioned among others for your applications and their underlying resource needs (compute, memory, storage, networking) to be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises based VMware solutions you may be familiar with.
You can give VMware vCloud Air a try for free while the offer lasts by clicking here (service details here). Basically, if you click on the link and register a new account for VMware vCloud Air, they will give you up to $500 USD in service credits to use in the real production environment while the offer lasts, which iirc is through the end of June 2015.
What this means is that you can go and set up some servers with as many CPUs or cores, memory, Hard Disk Drive (HDD) or flash Solid State Device (SSD) storage and external IP networks as you need, using various operating systems (CentOS, Ubuntu, Windows 2008, 2012, 2012 R2), for free, or until you use up the service credits.
Speaking of which, let me give you a bit of a tip or hint: even though you can get free time, if you provision a fast server with lots of fast SSD storage and leave it sitting idle overnight or over a weekend, you will chew up your free credits rather fast. So the tip, which should be common sense, is that if you are going to do some proof of concepts and then leave things alone for a while, power the virtual cloud servers off to stretch your credits further. On the other hand, if you have something that you want to run on a fast server with fast storage over a weekend or longer, give that a try, just pay attention to your resource usage and possible charges should you exhaust your service credits.
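To put that tip into rough numbers, here is a small back-of-the-envelope sketch. The hourly rate is a made-up placeholder (check the actual vCloud Air pricing for real numbers), but the arithmetic shows how much further the service credits stretch when idle servers are powered off.

```python
# Back-of-the-envelope credit burn estimate. The hourly rate below is a
# made-up placeholder for illustration only; consult the actual vCloud
# Air pricing for real numbers.
CREDIT = 500.00            # USD of promotional service credits
RATE_PER_HOUR = 1.25       # assumed combined compute + SSD storage rate (placeholder)

always_on_hours = 24 * 7   # left running all week
working_hours = 8 * 5      # powered on only during a 5-day work week

for label, hours in (("always on", always_on_hours), ("powered off when idle", working_hours)):
    weekly_cost = hours * RATE_PER_HOUR
    weeks = CREDIT / weekly_cost
    print("%-22s %3d hrs/week, ~$%.2f/week, credits last ~%.1f weeks"
          % (label, hours, weekly_cost, weeks))
```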
My Server StorageIO test drive mission objective
For my test drive, I created a new account by using the above link to get the service credits. Note that you can use your regular VMware account with vCloud Air, however you won’t get the free service credits. So while it is a few minutes of extra work, the benefit was worth it vs. simply using my existing VMware account and racking up more cloud services charges on my credit card. As part of this Server StorageIOlab test drive, I created two companion videos, part I here and part II here, that you can view to follow along and get a better idea of how vCloud works.
Phase one, create the virtual data center, database server, client servers and first setup
My goal was to set up a simple Virtual Data Center (VDC) consisting of five Windows 2012 R2 servers: one a MySQL database server, with the other four being client application servers. You can download MySQL here from Oracle as well as via other sources. For the application workload, to simplify things I used HammerDB as well as Benchmark Factory, which is part of the Quest Toad tool set for database admins. You can download a free trial copy of Benchmark Factory here, and HammerDB here. Another tool that I used for monitoring the servers is Spotlight on Windows (SoW), which is also free here. Speaking of tools, here is a link to various server and storage I/O performance and monitoring tools.
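For a sense of what those database tools do at a much smaller scale, here is a hedged sketch using the PyMySQL driver to time a batch of writes against a MySQL server. The host name, credentials, database and table are placeholders for illustration and not part of the actual test drive setup; HammerDB and Benchmark Factory drive far richer workloads than this.

```python
# Very small stand-in for what HammerDB / Benchmark Factory do at scale:
# time a batch of single-row writes against a MySQL server. Host,
# credentials, database and table names are placeholders for illustration.
import time
import pymysql

conn = pymysql.connect(host="mysql-server", user="dbuser",
                       password="secret", db="testdb", autocommit=True)

with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS kv (k INT PRIMARY KEY, v VARCHAR(64))")
    start = time.time()
    for i in range(1000):
        cur.execute("REPLACE INTO kv (k, v) VALUES (%s, %s)", (i, "value-%d" % i))
    elapsed = time.time() - start

print("1000 single-row writes in %.2f sec = %.0f ops/sec" % (elapsed, 1000 / elapsed))
conn.close()
```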
Links to tools that I used for this test-drive included:
Recap of what was done in phase one, watch the associated video here.
After the initial setup (e.g. part I video here), the next step was to add some more virtual machines and take a closer look at the environment. Note that most of the work in setting up this environment involved Windows, MySQL, HammerDB, Benchmark Factory, Spotlight on Windows and other common tools, so their installation is not a focus in these videos or this post; perhaps a future post will dig into those in more depth.
What was done during phase II (view the video here)
There is much more to VMware vCloud Air, and on their main site there are many useful links including overviews, how-to tutorials, product and service offering details and much more here. Besides paying attention to your resource usage and avoiding being surprised by service charges, two other tips I can pass along that are also mentioned in the videos (here and here) are to pay attention to what region you set up your virtual data centers in, and to have your network thought out ahead of time to streamline setting up the NAT, firewall and gateway configurations.
Where to learn more
Learn more about VMware vCloud Air and related topics, themes, trends, tools and technologies via the following links:
VMware vCloud Air home page including while the offer lasts complimentary service credits
What’s most important to know about my cloud privacy policy?
What this all means and wrap-up
Overall I like the VMware vCloud Air service, which, if you are VMware-centric, will be a familiar cloud option including integration with vCloud Director and other tools you may already have in your environment. Even if you are not familiar with VMware vSphere and the associated vRealize tools, the vCloud service is intuitive enough that you can be productive fairly quickly. On one hand vCloud Air does not have the extensive menu of service offerings to choose from such as with AWS, Google, Azure or others; however, that also means a simpler menu of options to choose from, which simplifies things.
I had wanted to spend some time actually using vCloud, and the offer of free service credits in the production environment made it worth making the time to set up some workloads and do some testing. Even if yours is not a VMware focused environment, I would recommend giving VMware vCloud Air a test drive to see what it can do for you, as opposed to what you can do for it…
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved