Intel and Micron unveil new 3D XPoint Non-Volatile Memory (NVM) for servers and storage

3D XPoint, NVM, persistent memory (PM), storage class memory (SCM)


Storage I/O trends

Updated 1/31/2018

This is the first of a three-part series on the Intel and Micron 3D XPoint non-volatile memory (NVM) announcement for servers and storage. Read Part II here and Part III here.

In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used both as primary main memory (e.g. what’s in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and as persistent storage faster than today’s NAND flash-based solid state devices (SSD), not to mention future hybrid usage scenarios. Note that while this announcement has the common term 3D in it, it is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).

Twitter hashtag #3DXpoint

The big picture, why this type of NVM technology is needed

Server and Storage I/O trends

  • Memory is storage and storage is persistent memory
  • No such thing as a data or information recession; more data is being created, processed and stored
  • Increased demand is also driving density along with convergence across server storage I/O resources
  • Larger amounts of data needing to be processed faster (large amounts of little data and big fast data)
  • Fast applications need more and faster processors, memory along with I/O interfaces
  • The best server or storage I/O is the one you do not need to do
  • The second best I/O is one with least impact or overhead
  • Data needs to be close to processing, processing needs to be close to the data (locality of reference); a quick sketch of this follows below
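
To make the locality and "best I/O" points concrete, here is a minimal Python sketch (illustrative only, with made-up latency numbers) showing how a small cache close to the processing avoids repeated trips to slower storage:

```python
from functools import lru_cache
import time

def read_from_storage(block: int) -> bytes:
    """Simulate a slow back-end read (e.g. disk or networked storage)."""
    time.sleep(0.005)  # pretend this I/O costs 5 ms
    return b"x" * 4096

@lru_cache(maxsize=1024)
def read_block(block: int) -> bytes:
    """Cached read: repeat requests for hot blocks never touch storage."""
    return read_from_storage(block)

# A workload with locality of reference: 10,000 reads over 100 hot blocks.
start = time.perf_counter()
for i in range(10_000):
    read_block(i % 100)
print(f"elapsed: {time.perf_counter() - start:.2f}s, {read_block.cache_info()}")
# Only 100 of the 10,000 reads hit the slow path; the rest are avoided I/Os.
```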


Server Storage I/O memory hardware and software hierarchy along with technology tiers

What did Intel and Micron announce?

Intel SVP and General Manager of the Non-Volatile Memory Solutions Group Robert Crooke (left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webcast here). What was announced is the 3D XPoint technology jointly developed and manufactured by Intel and Micron, a new form or category of NVM that can be used both as primary memory in servers, laptops and other computers, as well as for persistent data storage.


Robert Crooke (Left) and Mark Durcan (Right)

Summary of 3D XPoint announcement

  • New category of NVM for servers and storage
  • Joint development and manufacturing by Intel and Micron in Utah
  • Non-volatile, so it can be used for storage or persistent server main memory
  • Allows NVM to scale with data, storage and processor performance
  • Leverages capabilities of both Intel and Micron, who have collaborated in the past
  • Performance: Intel and Micron claim up to 1000x faster than NAND flash
  • Persistence: non-volatile unlike DRAM, with better durability (life span) than NAND flash
  • Capacity: densities about 10x better than traditional DRAM
  • Economics: cost per bit between DRAM and NAND, depending on packaging of resulting products (see the worked example below)
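
As a rough sense of where those claims would land, here is a simple worked example in Python; the NAND latency baseline is a generic ballpark figure of mine (not a vendor specification), and the rest just applies the claimed ratios:

```python
# Applying the announcement's claimed ratios to rough baseline figures.
nand_latency_us = 100.0                      # ballpark NAND flash read latency
xpoint_latency_us = nand_latency_us / 1000   # "up to 1000x faster" claim
dram_density = 1.0                           # normalized DRAM bit density
xpoint_density = dram_density * 10           # "~10x DRAM density" claim

print(f"3D XPoint latency (claimed): ~{xpoint_latency_us:.1f} us "
      f"vs ~{nand_latency_us:.0f} us for NAND flash")
print(f"3D XPoint density (claimed): ~{xpoint_density:.0f}x normalized DRAM")
# Cost per bit would sit between DRAM (highest) and NAND (lowest).
```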

What applications and products is 3D XPoint suited for?

In general, 3D XPoint should be usable for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT, cloud and managed service provider data center applications and services to consumer-focused uses, among many others.


3D XPoint enabling various applications

In general, applications or usage scenarios (along with supporting products) that can benefit from 3D XPoint include, among others, those that need larger amounts of main memory in a denser footprint, such as in-memory databases, little and big data analytics, gaming, waveform analysis for security, copyright or other detection analysis, life sciences, high-performance compute and high-productivity compute, energy, and video and content serving.

In addition, applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or unplanned maintenance, or of having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot), will benefit. 3D XPoint will also be useful for applications that need faster read and write performance compared to current-generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, much as today’s applications have benefited from DRAM used in Dual Inline Memory Modules (DIMM) and from NAND flash advances over the past several decades.
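
As a loose analogy of why that persistence matters (real persistent memory is byte-addressable and accessed via loads and stores, not files), this minimal Python sketch contrasts a volatile in-process cache, which starts empty after every restart, with memory-mapped state that survives one:

```python
import mmap
import os

PATH = "/tmp/pm_demo.bin"  # hypothetical file standing in for persistent memory
SIZE = 4096

first_run = not os.path.exists(PATH)
if first_run:
    with open(PATH, "wb") as f:
        f.truncate(SIZE)   # allocate the "persistent memory" region once

volatile_cache = {}        # a plain dict: empty again after every restart

with open(PATH, "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    if first_run:
        mm[:5] = b"warm!"  # populate (warm) the persistent state once
    # On later runs the state is already there -- no re-warming needed.
    print("volatile cache:", volatile_cache, "persistent state:", mm[:5])
    mm.close()
```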

Where to read, watch and learn more

Storage I/O trends

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and neither DRAM nor NAND flash will be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint, along with more analysis and commentary.

Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners; I have also bought and use some of their technologies directly and/or indirectly via their partners.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server storage I/O Intel NUC nick knack notes – First impressions

Storage I/O trends

Server storage I/O Intel NUC nick knack notes – First impressions

This is the first of a two-part (part II here) series of my experiences (and impressions) using an Intel NUC (a 4th generation model) for various things related to cloud, virtual, physical and software defined server storage I/O networking.

The NUC has been around for a few years now and continues to evolve; I recently bought my first one (a 4th generation model) to join some other servers that I have. My reason for getting a NUC is to use it as a simple low-power platform to run different software, including bare-metal OS, hypervisors, cloud, virtual and software defined server storage and networking applications that might otherwise be on an old laptop or mini-tower.

Intel® NUC with Intel® Core™ i5 Processor and 2.5-Inch Drive Support (NUC5i5RYH) via Intel.com

Introducing the Intel Next Unit of Computing aka NUC

For those not familiar, the NUC is a series of products from Intel called Next Unit of Computing that offers an alternative to traditional mini-desktops or even laptops and notebooks. There are several different NUC models available, including the newer 5th generation models (click here to see various models and generations). The NUCs are simple, small units of computing with an Intel processor and room for your choice of memory, persistent storage (e.g. Hard Disk Drive (HDD) or flash Solid State Device (SSD)), networking, video, audio and other peripheral device attachment.

Software (not supplied) is defined by what you choose to use, such as a Windows or *nix operating system, VMware ESXi, Microsoft Hyper-V, KVM or Xen hypervisor, or some other applications. The base NUC package includes front and rear-side ports for attaching various devices. In terms of functionality, think of a laptop without a keyboard or video screen, or a small head-less (e.g. no monitor) mini-tower desktop workstation PC.

Which NUC to buy?

If you need to be the first with anything new, then jump directly to the recently released 5th generation models.

On the other hand, if you are looking for a bargain, there are some good deals on 4th generation or older models. Likewise, depending on the processor speed and features needed along with your available budget, those criteria and others will direct you to a specific NUC model.

I went with a 4th generation NUC, realizing that newer models were just around the corner, figuring I could always get another (e.g. create a NUC cluster) newer model when needed. I also wanted a model with enough performance to last a few years of use and the flexibility to be reconfigured as needed. My choice was a model D54250WYK priced around $352 USD via Amazon (prices may vary by venue).

What's included with a NUC?

My first NUC is a model D54250WYK (e.g. BOXD54250WYKH1) whose specific speeds and feeds you can view here at the Intel site, along with ordering info here at Amazon (or your other preferred venue).

View and compare other NUC models at the Intel NUC site here.

The following images show the front-side two USB 3.0 ports along with headphone (or speaker) and microphone jacks. Looking at the rear-side of the NUC, there are a couple of air vents, a power connector port (external power supply), mini DisplayPort and HDMI video ports, GbE LAN, and two USB 3.0 ports.

NUC front view / rear ports of NUC
Left is front view of my NUC model 54250 and Right is back or rear view of NUC

NUC Model: BOXD54250WYKH1 (speeds/feeds vary by specific model)
Form factor: 1.95" tall
Processor: Intel Core i5-4250U with active heat sink fan
Memory: Two SO-DIMM DDR3L (e.g. laptop) memory slots, up to 16GB (e.g. 2 x 8GB)
Display: One mini DisplayPort with audio; one mini HDMI port with audio
Audio: Intel HD Audio, 8 channel (7.1) digital audio via HDMI and DisplayPort, also headphone jack
LAN: Intel Gigabit Ethernet (GbE) (I218)
Peripheral and storage: Two USB 3.0 ports (front side); two USB 3.0 ports (rear side); two USB 2.0 (internal); one SATA port (internal 2.5 inch drive bay); consumer infrared sensor (front panel)
Expansion: One full-length mini PCI Express slot with mSATA support; one half-length mini PCI Express slot
Included in the box: Laptop-style 19V 65W power adapter (brick) and cord, VESA mounting bracket (e.g. for mounting on rear of video monitor), integration (installation) guide, wireless antennae (integrated into chassis), Intel Core i5 logo
Warranty: 3-year limited

Processor Speeds and Feeds

There are various Intel Core i3 and i5 processors available depending on the specific NUC model; my 54250WYK has a two-core (1.3GHz each) 4th generation i5-4250U (click here to see Intel speeds and feeds), which includes Intel Visual BIOS, Turbo Boost, Rapid Start and virtualization support among other features.

Note that features vary by processor type, along with other software, firmware or BIOS updates. While the 1.3GHz two-core processor (e.g. max 2.6GHz with Turbo Boost) is not as robust as faster quad (or more) core processors running at 3.0GHz (or faster), for most applications, including as a first virtual lab or storage sandbox among other uses, it will be fast enough, comparable to a low- to mid-range laptop.

What this all means

In general I like the NUC so much that I bought one (model 54250) and would consider adding another in the future for some things; however, I also see the need to continue using my other compute servers for different workloads.

This wraps up part I of this two-part series. Continue reading part two here, where I cover the options I added to my NUC, initial configuration, deployment, use and additional impressions.

Ok, nuff said for now, check out part-two here.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 StorageIO and UnlimitedIO LLC All Rights Reserved

Server storage I/O Intel NUC nick knack notes – Second impressions

Storage I/O trends

Server storage I/O Intel NUC nick knack notes – Second impressions

This is the second of a two-part series about my first and second impressions of the Intel NUC (Next Unit of Computing). In the first post (here) I gave an overview and my first impressions, while in this post let's look at the options added to my NUC model 54250, first deployment and use, and more impressions.

Intel® NUC with Intel® Core™ i5 Processor and 2.5-Inch Drive Support (NUC5i5RYH) via Intel.com

What you will want to add to a NUC

Since the NUC is a basic brick with a processor mounted on its motherboard, you will need to add memory, some type of persistent storage device (mSATA, SATA or USB based) and optionally a WiFi card.

One of the nice things about the NUC is that it offers, in many ways, the equivalent functionality of a laptop or mini-tower without the extra overhead (cost, components, packaging), enabling you to customize as needed for your specific requirements. For example, there is no keyboard, mouse, video screen, WiFi, Hard Disk Drive (HDD) or flash Solid State Device (SSD) included, nor an operating system pre-installed. No memory is included either, enabling you to decide how much to configure using compatible laptop-style memory. Video and monitors attach via HDMI or mini DisplayPort, including VGA devices via an adapter cable. Keyboard and mouse, if needed, are handled via USB ports.

Here is what I added to my NUC model 54250.

1 x Crucial 16GB Kit (2 x 8GB) DDR3 1600 (PC3-12800) SODIMM 204-Pin Notebook Memory
1 x Intel Network 7260 WiFi Wireless-AC 7260 H/T Dual Band 2×2 AC+Bluetooth HMC (here is a link to the Intel site for various drivers)
1 x 500GB Samsung Electronics 840 EVO mSATA 0.85-Inch Solid State Drive
1 x SATA HDD, SSD or HHDD/SSHD (I used one of my existing drives)

Note that you will also need to supply some type of Keyboard Video Mouse (KVM) setup; in my case I used an HDMI to VGA adapter cable to attach the NUC via HDMI (for video) and USB (keyboard and mouse) to my StarTech KVM switch.

The following images show, on the left, the Intel WiFi card installed and, on the right, a Samsung 840 EVO mSATA 500GB flash SSD installed above the WiFi card. Also notice on the far right of the images the two DDR3 "notebook" class DRAM DIMM slots.

NUC WiFi card / mSATA SSD
Left: Intel WiFi card installed; Right: Samsung EVO mSATA SSD card (sits above WiFi card)

Note that the NUC (as do many laptops) accepts 9mm or smaller (e.g. thin 7mm height) HDDs and SSDs in its SATA drive bay. I mention this because some of the higher-capacity 2TB 2.5" SFF drives are taller than 9mm, as shown in the above image, and do not fit in the NUC internal SATA drive bay. While many devices and systems support 2.5" drive slots for HDDs, SSDs or HHDDs/SSHDs, pay attention to the height and avoid surprises when something does not fit like it was assumed to.

2.5 HDD and SSDs
Low-profile and tall-profile 2.5" SFF HDDs

Additional drives and devices can be attached using external USB 3.0 ports including HDDs, SSDs or even USB to GbE adapters if needed. You will need to supply your own operating system, hypervisor, storage, networking or other software, such as Windows, *nix, VMware ESXi, Hyper-V, KVM, Xen, OpenStack or any of the various ZFS based (among others) storage appliances.

Unpacking and physical NUC installation

Initial setup and physical configuration of the NUC is pretty quick, with the only tool needed being a Phillips screwdriver.

NUC and components ready for installation
Intel NUC 54250 and components ready for installation

With all the components including the NUC itself laid out for a quick inventory, including recording serial numbers (see image above), the next step is to open up the NUC by removing four Phillips screws from the bottom. Once the screws and bottom plate are removed, the SATA drive bay opens up to reveal the slots for memory, the mSATA SSD and the WiFi card (see images below). Once the memory, mSATA and WiFi cards are installed, the SATA drive bay covers those components and it is time to install a 2.5" standard height HDD or SSD. For my first deployment I temporarily installed one of my older HHDDs, a 750GB Seagate Momentus XT, that will be replaced by something newer soon.

NUC internal HDD/SSD slot / NUC internal HDD installed
View of NUC with bottom cover removed; Left: empty SATA drive bay, Right: HDD installed

After the components are installed, it is time to replace the bottom cover plate of the NUC, securing it in place with the four screws previously removed. Next up is attaching any external devices via USB and other ports, including KVM and LAN network connections. Once the hardware is ready, it's time to power up the NUC and check out the Visual BIOS (or UEFI) as shown below.

Intel NUC Visual BIOS / Intel NUC Visual BIOS display
NUC VisualBIOS screen shot examples

At this point, unless you have already installed an operating system, hypervisor or other software on a HDD, SSD or USB device, it is time to install your preferred software.

Windows 7

First up was Windows 7, as I already had an image built on the HHDD that required some drivers to be added. Specifically, a visit to the Intel resources site (see the NUC resources and links section later in this post) was made to get GbE LAN, WiFi and USB drivers. Once those were installed, the on-board GbE LAN port worked well, as did the WiFi. Another driver that needed to be downloaded was for a USB-to-GbE adapter to add another LAN connection. Also, a couple of reboots were required for other Windows drivers and configuration changes to take place, correcting some transient problems including KVM hangs, which eventually cleared themselves up.

Windows 2012 R2

Following Windows 7, next up was a clean install of Windows 2012 R2, which also required some drivers and configuration changes. One of the challenges is that Windows 2012 R2 is not officially supported on the NUC with its GbE LAN and WiFi cards. However, after doing some searches and reading a few posts, including this and this, a solution was found, and Windows 2012 R2 and its networking are working well.

Ubuntu and Clonezilla

Next up was a quick install of Ubuntu 14.04, which went pretty smoothly, as well as using Clonezilla to do some drive maintenance, moving images and partitions among other things.

VMware ESXi 5.5U2

My first attempt at installing a standard VMware ESXi 5.5U2 image ran into problems due to the GbE LAN port not being seen. The solution is to use a different build, or a custom ISO that includes the applicable GbE LAN driver (e.g. net-e1000e-2.3.2.x86_64.vib); there is some useful information at Florian Grehl's site (@virten) and over at Andreas Peetz's site (@VFrontDe), including a SATA controller driver for xahci. Once the GbE driver was added (the same driver that addresses other Intel NIC I217/I218 based systems) along with updating the SATA driver, VMware worked fine.

Needless to say, there are many other things I plan on doing with the NUC, both as a standalone bare-metal system and as a virtual platform, as time and projects allow.

What about building your own NUC alternative?

In addition to the NUC models available via Intel and its partners (accessorized as needed), there are also specially customized and ruggedized NUC versions, similar to what you would expect to find with laptops, notebooks and other PC based systems.

MSI ProBox rear view / MSI ProBox front view
Left: MSI ProBox rear view; Right: MSI ProBox front view

If you are looking to do more than what Intel and its partners offer, then there are some other options, such as increasing the number of external ports among other capabilities. One option which I recently added to my collection of systems is a DIY (Do It Yourself) MSI ProBox (VESA mountable) such as this one here.

MSI Probox internal view
Internal view MSI ProBox (no memory, processor or disks)

The MSI ProBox is essentially a motherboard with an empty single CPU socket (e.g. LGA 1150, up to 65W) supporting various processors, two empty DDR3 DIMM slots, and two empty 2.5" SATA drive bays among other capabilities. Enclosures such as the MSI ProBox give you flexibility in creating something more robust than a basic NUC yet smaller than a traditional server, depending on your specific needs.

If you are looking for other small form factor, modular and ruggedized server options as an alternative to a NUC, then check out those from Xi3, Advantech, Cadian Networks, and Logic Supply among many others.

Storage I/O trends

First NUC impressions

Overall I like the NUC and see many uses for it, from consumer and home (including entertainment and media systems and video security surveillance) to use as a small server or workstation device. In addition, I can see a NUC being used in smaller environments as a desktop workstation or as a lower-power, lower-performance system, including as a small virtualization host for SOHO, small SMB and ROBO environments. Another usage is for a home virtual lab as well as gaming, among other scenarios including simple software defined storage proofs of concept. For example, how about creating a small cluster of NUCs to run VMware VSAN, Datacore, EMC ScaleIO, StarWind, Microsoft SOFS or Hyper-V, as well as any of the many ZFS-based NAS storage software applications?

Pro’s – Features and benefits

Small, low-power, self-contained, with the flexibility to choose my own memory, WiFi and storage (HDD or SSD) without the extra cost of those items or software being included.

Con’s – Caveats or what to look out for

It would be nice to have another GbE LAN port; however, I addressed that by adding a USB 3.0 to GbE adapter. Likewise, it would be nice if the 2.5" SATA drive bay supported tall-height form-factor devices such as the 2TB drives. The workaround for adding larger-capacity and physically larger storage devices is to use the USB 3.0 ports. The biggest warning is that if you are going to venture outside of the officially supported operating system and application software realm, be ready to load some drivers, possibly patch and hack some install scripts, and then plug and pray it all works. So far I have not run into any major show stoppers that were not addressed with some time spent searching (Google will be your friend) and then loading the drivers or making configuration changes.

Additional NUC resources and links

Various Intel products support search page
Intel NUC support and download links
Intel NUC model 54250 page, product brief page (and PDF version), and support with download links
Intel NUC home theater solutions guide (PDF)
Intel HCL for NUC page and Intel Core i5-4250U processor speeds and feeds
VMware on NUC tips
VMware ESXi driver for LAN net-e1000e-2.3.2.x86_64
VMware ESXi SATA xahci driver
Server storage I/O Intel NUC nick knack notes – First impressions
Server Storage I/O Cables Connectors Chargers & other Geek Gifts (Part I and Part II)
Software defined storage on a budget with Lenovo TS140

Storage I/O trends

What this all means

The Intel NUC provides a good option for many situations that might otherwise need a larger mini-tower desktop workstation or similar system, for home, consumer and small office needs. The NUC can also be used in specialized, pre-configured, application-specific situations that need low power, basic system functionality and expansion options in a small physical footprint. In addition, the NUC can be a good option for adding to an existing physical and virtual lab, or as a basis for starting a new one.

So far I have found many uses for the NUC, freeing up other systems for other tasks while enabling some older devices to finally be retired. On the other hand, like most any technology, while the NUC is flexible, its low power and performance are not enough for some applications. However, the NUC gives me flexibility to leverage the applicable unit of compute (e.g. server, workstation, etc.) for a given task, or put another way, to use the right technology tool for the task at hand.

For now I only need a single NUC to be a companion to my other HP, Dell and Lenovo servers as well as MSI ProBox, however maybe there will be a small NUC cluster, grid or ring configured down the road.

What say you? Do you have a NUC, and if so, how is it being used? Any tips, tricks or hints to share with others?

Ok, nuff said for now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 StorageIO and UnlimitedIO LLC All Rights Reserved

Lenovo buys IBM’s xSeries aka x86 server business, what about EMC?

Storage I/O trends

Lenovo buys IBM’s xSeries x86 server business for $2.3B USD, what about EMC?

Once again Lenovo is the new owner of some IBM computer technology, this time by acquiring the x86 (e.g. xSeries) server business unit from Big Blue. Today Lenovo announced its plan to acquire the IBM x86 server business unit for $2.3B USD.

Research Triangle Park, North Carolina, and Armonk, New York – January 23, 2014

Lenovo (HKSE: 992) (ADR: LNVGY) and IBM (NYSE: IBM) have entered into a definitive agreement in which Lenovo plans to acquire IBM’s x86 server business. This includes System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, blade networking and maintenance operations. The purchase price is approximately US$2.3 billion, approximately two billion of which will be paid in cash and the balance in Lenovo stock.

IBM will retain its System z mainframes, Power Systems, Storage Systems, Power-based Flex servers, and PureApplication and PureData appliances.

Read more here

If you recall (or didn't know), around a decade or so ago IBM also spun off its laptop (e.g. ThinkPad) and workstation business unit to Lenovo after being one of the early PC players (I still have a model XT in my collection along with a Mac SE and a Newton).

What this means for IBM?

What this means is that IBM is selling off a portion of its systems technology group, which is where the servers, storage and related hardware and software technologies report. Note however that IBM is not selling off its entire server portfolio, only the x86 (e.g. Intel/AMD based) products that make up the xSeries as well as companion blade and related systems. This means that IBM is retaining its Power based systems (and processors) that include the pSeries, iSeries and of course the zSeries mainframes, in addition to the storage hardware/software portfolio.

However, as part of this announcement, Lenovo is also licensing from IBM the Storwize/V7000 technology as well as tape summit resources, and the GPFS based scale-out file systems used in SONAS and related products that are part of solution bundles tied to the x86 business.

Again to be clear, IBM is not selling off (at least at this time) Storwize, tape or other technology to Lenovo, other than the x86 server business. By server business, this means the technology, patents, people, processes, products, sales, marketing, manufacturing, R&D along with other entities that form the business unit, not all that different from when IBM divested the workstation/laptop aka PC business in the past.

Storage I/O trends

What this means for Lenovo?

What Lenovo gets is an immediate (once the deal closes) expansion of its server portfolio, including high-density systems for cloud and HPC as well as the regular enterprise, not to mention SME and SMB. Lenovo also gets blade systems as well as converged systems (server, storage, networking, hardware, software), hence why IBM is also licensing some technology to Lenovo that it is not selling. Lenovo also gets the sales, marketing, design, support and other aspects to expand its server business. By gaining the server business unit, Lenovo will now be in a place to take on Dell (who was also rumored to be in the market for the IBM servers), as well as HP, Oracle and other x86 system based suppliers.

What about EMC and Lenovo?

Yes, EMC, the storage company who is also a primary owner of VMware, a partner with Cisco and Intel in the VCE initiative, and who also entered into a partnership with Lenovo a year or so ago.

In case you forgot or didn't know, EMC, after breaking up with Dell, entered into a partnership with Lenovo back in 2012.

This partnership and its initiatives included developing servers that in turn EMC could use for its various storage and data appliances, which continue to leverage x86 type technology. In addition, that agreement found the EMC Iomega brand transitioning over into the Lenovo line-up both for domestic North America and internationally, including the Chinese market. Hence I have an older Iomega IX4 that says EMC, and a newer one that says EMC/Lenovo; also note that at CES a few weeks ago, some new Iomega products were announced.

In checking with Lenovo today, they indicated that it is business as usual and no changes with or to the EMC partnership.

Via email from Lenovo spokesperson today:

A key piece to Lenovo’s Enterprise strategy has always included strong partnerships. In fact today’s announcements reinforce that strategy very clearly.

Given the new scale, footprint and Enterprise credibility that this server acquisition affords Lenovo, we see great opportunity in offering complementary storage offerings to new and existing customers.

Lenovo's partnership with EMC is multifaceted and stays intact as an important part of Lenovo's overall strategy to offer customers compelling solutions built on world-class technology.

Lenovo will continue to offer Lenovo/EMC NAS products from our joint venture as well as resell EMC stand-alone storage platforms.

IBM Storwize storage and other products are integral to the in-scope platforms and solutions we acquired. In order to ensure continuity of business and the best customer experience we will partner with IBM for storage products as well.

We believe this is a great opportunity for all three companies, but most importantly these partnerships are in place and will remain healthy for the benefit for our customers.

Hence it is my opinion that for now it is business as usual; the IBM x86 business unit has a new home, and those people will be getting new email addresses and business cards, similar to how some of their associates did when the PC group was sold off a few years ago.

OTOH, there may also be new products that might become opportunities to be placed into the Lenovo EMC partnership; however, that is just my speculation at this time. Likewise, while there will be some groups within Lenovo focused on selling the converged Lenovo solutions coming from IBM that may in fact compete with EMC (among others) in some scenarios, that should be no more, and hopefully less, than what IBM has had with its own server groups at times competing with themselves.

Storage I/O trends

What does this mean for Cisco, Dell, HP and others?

For Cisco, instead of competing with one of its OEMs (e.g. IBM) for networking equipment (note IBM also owns some of its own networking), the server competition shifts to Lenovo, who is also a Cisco partner (it's called coopetition), and perhaps business as usual in many areas. For Dell, in the mid-market space, things could get interesting, and the Round Rock folks need to get creative and go beyond VRTX.

For HP, this is where IMHO it's going to get really interesting as Lenovo gets things transitioned. Near-term, HP could have a disruptive upper hand; however, longer-term, HP has to get its A-game on. Oracle is in the game, as are a bunch of others from Fujitsu to Supermicro, and outside of North America, in particular China, there is also Huawei. Back to EMC and VCE: while I expect the Cisco partnership to stay, I also see a wild card where EMC can leverage its Lenovo partnership into more markets, while Cisco continues to move into storage and other adjacent areas (e.g. more coopetition).

What this means now and going forward?

Thus this is as much about the enterprise, SME and SMB as it is HPC, cloud and high-density, where the game is about volume. Likewise, there is also the convergence or data infrastructure angle, combining server, storage and networking hardware, software and services.

One of the things I have noticed about Lenovo as a customer using ThinkPads for over 13 years now (not the same one) is that while they are affordable, instead of simply cutting cost and quality, Lenovo seems to have found ways to remove cost, which is different than simply cutting to go cheap.

Case in point: about a year and a half ago I dropped my iPhone on my Lenovo X1 keyboard (which is back-lit) and broke a key. After trying to find a replacement key on the web, I called Lenovo; they said no worries, and the next morning a new keyboard for the laptop was on my doorstep by 10:30 AM with instructions on how to remove the old one, put in the new one, and do the RMA, no questions asked (read more about this here).

The reason I mention that story about my X1 laptop is that it ties to what I'm curious about and watching with their soon to be expanded server business.

Will they go in and simply look to reduce cost by making cuts from design to manufacturing to part quality, service and support, or find ways to remove complexity and cost while providing more value?

Now I wonder whose technology will join my HP and Dell systems to fill some empty rack space in the not so distant future to support growth?

Time will tell. Congratulations to Lenovo and the IBMers who now have a new home, best wishes.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2014

Dell Inspiron 660 i660, Virtual Server Diamond in the rough?

Storage I/O trends

Dell Inspiron 660 i660, Virtual Server Diamond in the rough?

During the 2013 post-Thanksgiving Black Friday shopping day, I did some on-line buying, including a Dell Inspiron 660 i660 (5629BK) to be used as a physical machine (PM) or VMware host (among other things).

Now technically, I know this is a workstation or desktop and thus not what some would consider a server; however, as another PM to add to my VMware environment (or to be used as a bare-metal platform), it is a good companion to my other systems.

Via Dell.com Dell 660 i660

Taking a step back, needs vs. wants

Initially my plan for this additional system was to go with a larger, more expensive model with as many DDR3 DIMM (memory) and PCIe x4/x8/x16 expansion slots as possible. Some of my other criteria were PCIe Gen 3 and the latest Intel processor generation with VT (Virtualization Technology) and Extended Page Tables (EPT) for server virtualization support, without breaking my budget. Heck, I would love a Dell VRTX or some similar type of server from the likes of Cisco, HP, IBM, Lenovo, Supermicro among many others. On the other hand, I really don't need one of those types of systems yet, unless of course somebody wants to send some to play with (excuse me, test drive, try out).

Hence needs are what I must have or need, while wants are those things that would be, well, nice to have.

Server shopping and selection

In the course of shopping around and looking at alternatives, I talked with Robert Novak (aka @gallifreyan), who reminded me to think outside the box a bit, literally. Check out Robert's blog (aka rsts11, a great blog name btw for those of us who used to work with RSTS, RSX and others), including a post he did shortly after I had that conversation with him. If you read his post and continue through this one, you should be able to connect the dots.

While I still have a need and plans for another server with more PCIe and DDR3 slots (maybe wait for DDR4? ;) ), I found a Dell Inspiron 660.

Candidly, I normally would have skipped over this type or class of system; however, what caught my eye was that while it is limited to only two DDR3 DIMM slots and a single PCIe x16 slot, there were three extra x1 slots which, while not as robust, certainly gave me some options if I need to use those for older, slower things. Likewise, leveraging higher-density DIMMs, the system is already now at 16GB RAM, waiting for larger DIMMs if needed.

VMware view of Inspiron 660

The Dell Inspiron 660-i660 I found was priced at a little over $550 (delivered) with an Intel i5-3330 processor (quad-core, quad-thread, 3GHz clock), PCIe Gen 3, one PCIe x16 and three PCIe x1 slots, 8GB DRAM (since reallocated), a GbE port and built-in WiFi, Windows 8 (since P2V'd and moved into the VMware environment), keyboard and mouse, plus a 1TB 6Gb SATA drive. I could afford two, maybe three or four of these in place of a larger system (at least for now). While for some things I have a need for a single larger server, there are other things where having multiple smaller ones with enough processing performance, VT and EPT support comes in handy (if not required for some virtual servers).

One of the enhancements I made once the initial setup of the Windows system was complete was to clone and P2V that image, and then redeploy the 1TB SATA drive to join others in the storage pool. Thus the 1TB SATA HDD has been replaced with (for now) a 500GB Momentus XT HHDD, which by the time you read this could already have changed to something else.

Another enhancement was bumping up the memory from 8GB to 16GB, and then adding a StarTech enclosure (see below) for more internal SAS/SATA storage (it supports both 2.5" SAS and SATA HDDs as well as SSDs). In addition to the on-board SATA drive port plus the one being used for the CD/DVD, there are two more ports for attaching to the StarTech or other large 3.5" drives that live in the drive bay. Depending on what I'm using this system for, it has different types of adapters for external expansion or networking, some of which have included 6Gbps and 12Gbps SAS HBAs.

What about adding more GbE ports?

As this is not a general-purpose larger system with many PCIe expansion slots, that is one of the downsides you accept for this cost. However, depending on your needs, you have some options. For example, I have some Intel PCIe x1 GbE cards to give extra networking connectivity if or when needed. Note however that these x1 slots are PCIe Gen 1, so from a performance perspective exercise caution when mixing these with other newer, faster cards when performance matters (more on this in the future).
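
To put that Gen 1 x1 caveat in perspective, here is a quick Python comparison using approximate usable per-lane PCIe bandwidth (ballpark figures after encoding overhead, not measured results):

```python
# Approximate usable PCIe bandwidth per lane (after encoding overhead), MB/s.
pcie_x1_mb_s = {"Gen1": 250, "Gen2": 500, "Gen3": 985}
gbe_mb_s = 125  # 1 GbE line rate is ~125 MB/s

for gen, bw in pcie_x1_mb_s.items():
    print(f"PCIe {gen} x1: ~{bw} MB/s (~{bw // gbe_mb_s}x a GbE port)")
# A Gen 1 x1 slot (~250 MB/s) is fine for a single GbE NIC, but would choke
# a faster card (e.g. 10GbE needs ~1,250 MB/s, beyond even a Gen 3 x1 lane).
```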

Via Amazon.com Intel PCIe x1 GbE card
Via Amazon.com Intel (Gigabit CT PCI-E Network Adapter EXPI9301CTBLK)

One of the caveats to be aware of if you are going to be using VMware vSphere/ESXi is that the Realtek GbE NIC on the Dell Inspiron 660 i660 may not play well; however, there are workarounds. Check out some of the workarounds over at Kendrick Coleman's (@KendrickColeman) and Erik Bussink's (@ErikBussink) sites, both of which were very helpful, and I can report that the Realtek GbE is working fine with VMware ESXi 5.5a.

Need some extra SAS and SATA internal expansion slots for HDDs and SSDs?

The StarTech 4 x 2.5" SAS and SATA internal enclosure supports SSDs and HDDs at various speeds depending on what you connect the back-end connector port to. On the back of the enclosure chassis there is a connector that is a pass-through to the SAS drive interface and also accepts SATA drives. This StarTech enclosure fits nicely into an empty 5.25" CD/DVD expansion bay; you then attach the individual drive bays to your internal motherboard SAS or SATA ports, or to those on another adapter.

Via Amazon.com StarTech 4 port SAS / SATA enclosure
Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

So far I have used these enclosures attached to various adapters at different speeds as well as with HDDs, HHDDs, SSHDs and SSDs at various SAS/SATA interface speeds up to 12Gbps. Note that unlike some other enclosures that have a SAS or SATA expander, the drive bays in the StarTech are pass-through, hence are not regulated by an expander chip and its speed. Price for these StarTech enclosures is around $60-90 USD, and they are good for internal storage expansion (hmm, need to build your own NAS or VSAN or storage server appliance? ;) ).

Via Amazon Molex power connector

Note that you will also need to get a Molex power connector to go from the back of the drive enclosure to an available power port, such as one for an expansion DVD/CD, which you can find at a Radio Shack, Fry's or many other venues for a couple of dollars. Double-check your specific system and cable connector leads to verify what you will need.

How is it working and performing

So far so good; in addition to using it for some initial calibration and validation activities, the i660 is performing very well and there is no buyer's remorse. Ok, sure, I would like more PCIe Gen 3 x4/x8/x16 slots or an extra on-board Ethernet port; however, all the other benefits have outweighed those pitfalls.

Speaking of which, if you think an SSD (or other fast storage device) is fast on a 6Gbps SAS or PCIe Gen 2 interface for physical or virtual servers, wait until you experience those IOPS or latencies at 12Gbps SAS and PCIe Gen 3 with a faster current-generation Intel processor, just saying ;)…

Server and Storage I/O IOPS and VMware

In the above chart, a Windows 7 64-bit system (VM configured with 14GB DRAM) on VMware vSphere V5.5.1 is shown running on different hardware configurations. The Windows system is running Futuremark PCMark 7 Pro (v1.0.4). From left to right: the Windows VM on the Dell Inspiron 660 with 16GB physical DRAM using an SSHD (Solid State Hybrid Drive); second from the left, results running on a Dell T310 with an Intel X3470 processor, also on an SSHD; in the middle, the workload on the Dell 660 running on an HHDD; second from the right, the workload on the Dell T310, also on an HHDD; and on the right, the same workload on an HP DC5800 with an Intel E8400. The workload results show a composite score, system storage, simulated user productivity, lightweight processing, and compute-intensive tasks.

Futuremark PCMark Windows benchmark
Futuremark PCMark

Don’t forget about the KVM (Keyboard Video Mouse)

Mention KVM to many people in and around the server, storage and virtualization world and they think KVM as in the hypervisor; however, to others it means Keyboard, Video and Mouse, aka the other KVM. As part of my recent and ongoing upgrades, it was also time to upgrade from the older, smaller KVMs to a larger, easier to use model. The benefit: support growth while also being easier to work with. Having done some research on various options that varied in price, I settled on the StarTech shown below.

Via Amazon.com StarTech 8 port KVM
Via Amazon.com StarTech 8 Port 1U USB KVM Switch

What's cool about the above 8-port StarTech KVM switch is that it comes with 8 cables (there are 8 ports) that on one end look like a regular VGA monitor cable connector. On the other end, which attaches to your computer, there is the standard VGA connection that attaches to your video out, and a short USB tail cable that attaches to an available USB port for keyboard and mouse. Needless to say, it helps to cut down on the cable clutter while coming in at around $38.00 USD per server port being managed, or about a dollar a month over a little more than three years.
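
That per-port arithmetic is easy to sanity check; here it is as a tiny Python snippet (the 38-month figure is my reading of "a little over three years"):

```python
# KVM switch cost amortization (numbers from the paragraph above).
ports = 8
cost_per_port_usd = 38.00                    # ~$38 per managed server port
total_cost_usd = ports * cost_per_port_usd   # ~$304 for the 8-port switch
months = 38                                  # "a little over three years"
print(f"total: ${total_cost_usd:.2f}, "
      f"per port: ${cost_per_port_usd / months:.2f}/month")  # ~$1.00/month
```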

Word of caution on makes and models

Be advised that there are various makes and models of the Dell Inspiron available that differ in processor generation and thus feature set. Pay attention to which make or model you are looking at, as prices can vary; hence double-check the processor make and model and then visit the Intel site to see if it is what you are expecting. For example, I double-checked that the processor for the different models I looked at was the i5-3330 (view Intel specifications for that processor here).

Summary

Thanks to Robert Novak (aka @gallifreyan) for taking some time providing useful tips and ideas to help think outside the box for this, as well as some future enhancements to my server and StorageIO lab environment.

Consequently, while the Dell Inspiron 660 i660 was not the server that I wanted, it has turned out to be the system that I need now, and hence IMHO a diamond in the rough, if you get the right make and model.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2013 StorageIO and UnlimitedIO All Rights Reserved

EMC New VNX MCx doing more storage I/O work vs. just being more

Storage I/O trends

It's not how much you have, it's how storage I/O work gets done that matters

Following last week's VMworld event in San Francisco, where among other announcements was this one around Virtual SAN (VSAN) along with Software Defined Storage (SDS), EMC today made several announcements.

Today’s EMC announcements include:

  • The new VNX MCx (Multi Core optimized) family of storage systems
  • VSPEX proven infrastructure portfolio enhancements
  • Availability of ViPR Software Defined Storage (SDS) platform (read more from earlier posts here, here and here)
  • Statement of direction preview of Project Nile for elastic cloud storage platform
  • XtremSW server cache software version 2.0 with enhanced management and support for VMware, AIX and Oracle RAC

EMC ViPR / EMC XtremSW cache software

Summary of the new EMC VNX MCx storage systems include:

  • More processor cores, PCIe Gen 3 (faster bus), front-end and back-end I/O ports, DRAM and flash cache (as well as drives)
  • More 6Gb/s SAS back-end ports to use more storage devices (SAS and SATA flash SSD, fast HDD and high-capacity HDD)
  • MCx – Multi-core optimized, with software rewritten to make use of threads and resources vs. simply using more sockets and cores at higher clock rates (see the illustration after this list)
  • Data Footprint Reduction (DFR) capabilities including block compression and dedupe, file dedupe and thin provisioning
  • Virtual storage pools that include flash SSD, fast HDD and high-capacity HDD
  • Block (iSCSI, FC and FCoE) and NAS file (NFS, pNFS, CIFS) front-end access with object access via Atmos Virtual Edition (VE) and ViPR
  • Entry level pricing starting at below $10,000 USD
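
MCx itself is EMC's proprietary code, so purely as a generic illustration of the principle, the following Python sketch shows how the same hardware and the same I/O requests finish far sooner when work is kept in flight across available threads instead of being issued one at a time:

```python
import concurrent.futures
import time

def io_task(n: int) -> int:
    """Simulate one storage I/O request (5 ms of waiting, not CPU work)."""
    time.sleep(0.005)
    return n

requests = range(200)

t0 = time.perf_counter()
for r in requests:          # serial: one request at a time
    io_task(r)
serial = time.perf_counter() - t0

t0 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(io_task, requests))  # overlapped across 16 worker threads
threaded = time.perf_counter() - t0

print(f"serial: {serial:.2f}s, threaded: {threaded:.2f}s")
# Same resources, same requests; the difference is how effectively the
# available threads are used to keep I/O in flight.
```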

EMC VNX MCx systems

What is this MCx stuff, is it just more hardware?

While there is more hardware that can be used in different configurations, the key or core (pun intended) point of MCx is that EMC has taken the time and invested in reworking the internal software of the VNX, which has its roots going back to the Data General CLARiiON that EMC acquired. This is similar to an effort EMC made a few years back when it overhauled the Symmetrix (now known as the VMAX) into the DMX. That effort expanded from a platform or processor port into re-architecting and software optimizing (rewriting portions) to leverage new and emerging hardware capabilities more effectively.

EMC VNX MCx

With MCx, EMC is doing something similar, in that core portions of the VNX software have been re-architected and written to take advantage of more threads and cores being available to do work more effectively. This is not all that different from what occurs (or should occur) with upper-level applications that eventually get rewritten to leverage underlying new capabilities to do more work faster in a more cost-effective way. MCx also leverages flash as a primary medium, with data then being moved (in 256MB chunks) down into lower tiers of storage (SSD and HDD drives).
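
EMC has not published the MCx tiering internals here, but the chunk-based movement concept can be sketched generically. Below is a toy Python model (the tier names, hot threshold and access-count policy are all made up for illustration) that places 256MB chunks on flash or a lower HDD tier based on how often they are hit:

```python
from collections import Counter

CHUNK_MB = 256              # movement granularity per the announcement
access_counts = Counter()   # I/O hits per chunk id

def record_io(chunk_id):
    access_counts[chunk_id] += 1

def place_chunks(hot_threshold=100):
    """Toy policy: hot chunks live on flash, cold chunks demote to HDD."""
    return {cid: ("flash SSD" if hits >= hot_threshold else "HDD tier")
            for cid, hits in access_counts.items()}

# Simulate a skewed workload: chunk 0 is hot, chunk 7 is barely touched.
for _ in range(500):
    record_io(0)
record_io(7)
print(place_chunks())  # {0: 'flash SSD', 7: 'HDD tier'}
```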

Storage I/O trends

EMC VNX has in the past had FAST Cache, which enables SSD drives to be used as an extension of main cache as well as drive targets. Thus while MCx can and does leverage more and faster cores, as would most any software, it is also able to leverage those cores and threads in a more effective way. After all, it's not just how many processors, sockets, cores, threads, L1/L2 cache, DRAM, flash SSDs and other resources you have, it's how effectively you use them. Also keep in mind that a bit of flash in the right place used effectively can go a long way vs. having a lot of cache in the wrong place or not used optimally, which will end up costing a lot of cash.

Moving forward, this means that EMC should be able to further refine and optimize other portions of the VNX software not yet updated, to take further advantage of new hardware platforms and capabilities.

Does this mean EMC is catching up with newer vendors?

Just as more of something is not always better, it's how those items are used that matters; just because something is new does not mean it's better or faster. That will manifest itself when they are demonstrated and performance results shown. The key is showing the performance across different workloads that have relevance to your needs, and conveying metrics that matter, with context.

Storage I/O trends

Context matters, including the type and size of work being done, the number of transactions, IOPS, files or videos served, pages processed or items rendered per unit of time, or response time and latency (aka wait or think time), along with others. Thus some newer systems may be faster on paper, PowerPoint, WebEx or YouTube, or via some benchmarks; however, what is the context, and how do they compare to others on an apples-to-apples basis?

What are some other enhancements or features?

  • Leveraging of FAST VP (Fully Automated Storage Tiering for Virtual Pools) with improved MCx software
  • Increased effectiveness of available hardware resources (processors, cores, DRAM, flash, drives, ports)
  • Active-active LUNs accessible by both controllers, as well as legacy ALUA support

Data sheets and other material for the new VNX MCx storage systems can be found here, with software options and bundles here, and general speeds and feeds here.

Learn more here at the EMC VNX MCx storage system landing page and compare VNX systems here.

What does then new VNX MCx family look like?

EMC VNX MCx family image

Is VNX MCx all about supporting VMware?

Interestingly, if you read between the lines, listen closely to the conversations and ask the right questions, you will realize that while VMware is an important workload or environment to support, it is not the only one targeted for VNX. Likewise, if you listen and look beyond what is normally amplified in various conversations, you will find that systems such as VNX are being deployed as back-end storage in cloud (public, private, hybrid) environments for use with technologies such as OpenStack or object based solutions (visit www.objectstoragecenter.com for more on object storage systems and access).

There is a common myth that cloud and service providers all use white box commodity hardware, including JBOD, for their systems; some do, however some are also using systems such as the VNX among others. In some of these scenarios the VNX type systems are or will be deployed in large numbers, essentially consolidating the functions of what had been done by an even larger number of JBOD based systems. This is where some of you will have a déjà vu or back-to-the-future moment from the mid 90s, when there was an industry movement to combine all the DAS and JBOD into larger storage systems. Don't worry if you are not yet reading about this trend in your favorite industry rag or analyst briefing notes; ask or look around and you might be surprised at what is occurring, granted it might be another year or two before you read about it (just saying ;).

Storage I/O trends

What that means is that VNX MCx is also well positioned for working with ViPR or Atmos Virtual Edition among other cloud and object storage stacks. VNX MCx is also well positioned, with its new low cost of entry, for general purpose workloads and applications ranging from file sharing, email, web and database to demanding high-performance, low-latency workloads with large amounts of flash SSD. In addition to being used for general purpose storage, VNX MCx will also complement data protection solutions for backup/restore, BC, DR and archiving, such as Data Domain, Avamar and NetWorker among others. Speaking of server virtualization, EMC also has tools for working with Hyper-V, Xen and KVM in addition to VMware.

If there is an all-flash VNX MCx, doesn't that compete with XtremIO?

Yes, there are all-flash VNX MCx configurations, just as there have been all-flash VNX systems before; however, these will be positioned for different use case scenarios by EMC and its partners to avoid competing head to head with XtremIO. Thus EMC will need to be diligent in being very clear to its own sales and marketing forces, as well as those of partners and customers, about what to use when, where, why and how.

General thoughts and closing comments

The VNX MCx is a good set of enhancements by EMC and an example of how it's not how much you have that matters, rather how effectively you use it.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

HP Moonshot 1500 software defined capable compute servers

Storage I/O cloud virtual and big data perspectives

Riding the current software defined data center (SDC) wave being led by the likes of VMware, and the software defined networking (SDN) wave also championed by VMware via its acquisition of Nicira last year, Software Defined Marketing (SDM) is in full force. HP, a player in providing the core building blocks for traditional little data and big data, along with physical, virtual, converged, cloud and software defined environments, has announced a new compute, processor or server platform called the Moonshot 1500.

HP Moonshot software defined server image

Software defined marketing aside, there are some real and interesting things from a technology standpoint that HP is doing with the Moonshot 1500 along with other vendors who are offering micro server based solutions.

First, for those who see server (processor and compute) improvements as being more and faster cores (and threads) per socket, along with extra memory, not to mention 10GbE or 40GbE networking and PCIe expansion or I/O connectivity, hang on to your hats.

HP Moonshot software defined server image individual server blade

Moonshot is in the mold of the micro servers or micro blades such as what HP has offered in the past, along with the likes of Dell and SeaMicro (now part of AMD). Micro servers are almost the opposite of the configuration found on regular servers or blades, where the focus is putting more capability on a motherboard or blade.

With micro servers, the approach is to support those applications and environments that do not need lots of CPU processing capability or large amounts of storage, I/O or memory. These include some web hosting or cloud application environments that can leverage more smaller, lower-power, lower-performance, less resource-intensive platforms. For example, big data (or little data) applications whose software or tools benefit from many low-cost, low-power, lower-performance nodes in distributed, clustered, grid, RAIN or ring based architectures can benefit from this type of solution.

HP Moonshot software defined server image and components

What is the Moonshot 1500 system?

  • 4.3U high rack mount chassis that holds up to 45 micro servers
  • Each hot-swap micro server is its own self-contained module similar to blade server
  • Server modules install vertically from the top into the chassis similar to some high-density storage enclosures
  • Compute or processors are Intel Atom S1260 2.0GHz based processors with 1 MB of cache memory
  • Single SO-DIMM slot (unbuffered ECC at 1333 MHz) supporting 8GB (1 x 8GB DIMM) of DRAM
  • Each server module has a single onboard 2.5″ SATA drive (200GB SSD, or 500GB or 1TB HDD)
  • A dual port Broadcom 5720 1Gb Ethernet LAN adapter per server module that connects to the chassis switches
  • Marvell 9125 storage controller integrated onboard each server module
  • Chassis and enclosure management along with ACPI 2.0b, SMBIOS 2.6.1 and PXE support
  • A pair of Ethernet switches, each providing up to six 10GbE uplinks for the Moonshot chassis
  • Dual RJ-45 connectors for iLO chassis management are also included
  • Status LEDs on the front of each chassis provide status of the servers and network switches
  • Support for Canonical Ubuntu 12.04, RHEL 6.4 and SUSE Linux Enterprise Server (SLES) 11 SP2

Storage I/O cloud virtual and big data perspectives

Notice a common theme with Moonshot along with other micro server-based systems and architectures?

If not, it is simple; I mean literally, simple and flexible is the value proposition.

Simple is the theme (with software defined for marketing), along with low cost, lower energy demand, lower performance, and less of what is not needed, in order to remove cost.

Granted, not all applications will be a good fit for micro servers (excuse me, software defined servers) as some will need the more robust resources of traditional servers. With solutions such as HP Moonshot, system architects and designers have more options available to them as to what resources or solution options to use. For example, a cloud or object storage system based solution that does not need a lot of processing performance, memory or storage per node might find this an interesting option for mid to entry-level needs.

Will HP release a version of their LeftHand or IBRIX (both since renamed) based storage management software on these systems for some market or application needs?

How about deploying NoSQL type tools including Cassandra or Mongo, or CloudStack, OpenStack Swift, Basho Riak (or Riak CS) or other software including object storage on these types of solutions, or web servers and other applications that do not need the fastest processors or the most memory per node?

Thus micro server-based solutions such as Moonshot enable return on innovation (the new ROI) by enabling customers to leverage the right tool (e.g. hard product) to create their soft product allowing their users or customers to in turn innovate in a cost-effective way.

Will the Moonshot servers be the software defined turnaround for HP? Click here to see what Bloomberg has to say, or Forbes here.

Learn more about Moonshot servers at HP here, here or data sheets found here.

Btw, HP claims that this is the industry's first software defined server, hmm.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

VCE revisited, now & zen

StorageIO Industry trends and perspectives image

Yesterday VCE and their proud parents announced revenues had reached an annual run rate of a billion dollars. Today VCE announced some new products along with enhancements to others.

Before going forward though, let's go back for a moment to help set the stage and see where things might be going in the future. A little over three years ago, back in November 2009, VCE was born and initially named ACADIA by its proud parents (Cisco, EMC, Intel and VMware). Here is a post that I did back then.

Btw, the reference to Zen might cause some to think that I don't know how to properly refer to the Xen hypervisor. It is really a play on Robert Plant's album Now & Zen and its song Tall Cool One. For those not familiar, click on the link and listen (some will have déjà vu, others might think it's new and cool) as it takes a look back as well as at the present, similar to VCE.

Robert Plant Now & Zen vs. Xen hypervisor

On the other hand, this might prompt the question of when will Xen be available on a Vblock? For that I defer you to VCE CTO Trey Layton (@treylayton).

VCE stands for Virtual Computing Environment and was launched as a joint initiative including products and a company (since renamed from Acadia to VCE) to bring all the pieces together. As a company, VCE is based in Plano (Richardson) Texas, just north of downtown Dallas and down the road from EDS, or what is now left of it after the HP acquisition. The primary product of VCE has been the Vblock. The Vblock is a converged solution comprising components from their parents such as VMware virtualization and management software tools, Cisco servers, EMC storage and software tools and Intel processors.

Not surprisingly there are many ex-EDS personnel at VCE along with some Cisco, EMC, VMware and many other people from other organizations in Plano as well as other cities. Also interesting to note that unlike other youngsters that grow up and stay in touch with their parents via technology or social media tools, VCE is also more than a few miles (try hundreds to thousands) from the proud parents' headquarters in the San Jose, California and Boston areas.

As part of a momentum update, VCE and their parents (Cisco, EMC, VMware and Intel) announced an annual revenue run rate of a billion dollars in just three years. In addition, the proud parents and VCE announced that they have over 1,000 revenue-shipped and installed Vblock systems (also here) based on Cisco compute servers and EMC storage solutions.

The VCE announcement consists of:

  • SAP HANA database application optimized Vblocks (two models, 4 node and 8 node)
  • VCE Vision management tools and middleware, or what I have referred to as Valueware
  • Entry level Vblock (100 and 200) with Cisco C-Series servers and EMC (VNXe and VNX) storage
  • Performance and functionality enhancements to existing Vblock models 300 and 700
  • Statement of direction for more specialized Vblocks besides SAP HANA


Images courtesy of VCE.com, used with permission.

While VCE is known for their Vblock converged, stack, integrated, data center in a box, private cloud among other descriptors, there is more to the story. VCE is addressing convergence of common IT building blocks for cloud, virtual and traditional physical environments. Common core building blocks include servers (compute or processors), networking (I/O and connectivity), storage, hardware, software, management tools along with people, processes, metrics, policies and protocols.

Storage I/O image of cloud and virtual IT building blocks

I like the visual image that VCE is using (see below) as it aligns with and has themes common to what I have been discussing in the past.


Images courtesy of VCE.com, used with permission.

VCE Vision is software with APIs that collects information about Vblock hardware and software components to give insight to other tools and management frameworks, for example a VMware vCenter plug-in and a vCenter Operations Manager adapter, which should not be a surprise. Customers will also be able to write to the Vision API to meet their custom needs. Let us watch and see what VCE does to add support for other software and management tools, along with gaining support from others.
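
To give a sense of what writing to such an API might look like, here is a hypothetical Python sketch; the endpoint path, field names and authentication below are my own assumptions for illustration only, not the documented VCE Vision API:

```python
import requests

# Hypothetical sketch only: the base URL, paths and JSON fields are
# assumptions, not the actual VCE Vision API.
VISION = "https://vblock-mgmt.example.com/api"

def compliance_report(session):
    components = session.get(f"{VISION}/components").json()
    for comp in components:
        # e.g. comp = {"name": "fabric-a", "type": "switch",
        #              "firmware": "5.2", "compliant": False}
        if not comp.get("compliant", True):
            print(f"{comp['name']} ({comp['type']}): "
                  f"firmware {comp['firmware']} out of compliance")

with requests.Session() as s:
    s.auth = ("admin", "secret")  # placeholder credentials
    compliance_report(s)
```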


Images courtesy of VCE.com, used with permission.

Vision is more than just an information source feed for VMware vCenter or VASA or tools and frameworks from others. Vision is software developed by VCE that will enable insight and awareness into the Vblock and applications; it will also confirm and give status of physical and logical component configuration. This provides the basis for setting up automated or programmatic remediation, such as determining what software or firmware to update based on different guidelines.


Images courtesy of VCE.com, used with permission.

Initially, VCE Vision provides an (information) inventory and perspective on how those components are in compliance with firmware or software releases. VCE is indicating that Vision will continue to evolve; after all, this is the V1.0 release, with future enhancements targeted towards taking action, controlling or active management, so stay tuned.

StorageIO Industry trends and perspectives image

Some trends, thoughts and perspectives

The industry adoption buzz is around software defined X, where X can be data center (SDDC), storage (SDS), networking (SDN), marketing (SDM) or other things. There is hype and noise around software defined, which in the case of some technologies is good. On the marketing hype side, this has led to some Software Defined BS (SDBS).

Thus, it was refreshing, at least in the briefing session I was involved in, to hear minimal focus on software defined and more around customer and IT business enablement with technology that is shipping today.

VCE Vision is a good example of adding value, hence what I refer to as Valueware, around converged components. For those vendors who have similar solutions, I urge them to streamline, simplify and more clearly articulate their value proposition if they have valueware.

Vendors including VCE continue to evolve their platform based converged solutions by adding more valueware, management tools, interfaces, APIs, interoperability and support for more applications. The support for applications is also moving beyond simple line item ordering or part number SKUs to ease acquisition and purchasing. Solutions including VCE Vblock, NetApp FlexPod (which also uses Cisco compute servers), IBM PureSystems (PureFlex, etc.) and Dell vStart among others are extending their support and optimization for various software solutions. These software solutions range from SAP (including HANA), Microsoft (Exchange, SQLserver, Sharepoint), Citrix desktop (VDI), Oracle, OpenStack, Hadoop map reduce along with other little-data, big-data and big-bandwidth applications to name a few.

Additional and related reading:
Acadia VCE: VMware + Cisco + EMC = Virtual Computing Environment
Cloud conversations: Public, Private, Hybrid what about Community Clouds?
Cloud, virtualization, Storage I/O trends for 2013 and beyond
Convergence: People, Processes, Policies and Products
Hard product vs. soft product
Hardware, Software, what about Valueware?
Industry adoption vs. industry deployment, is there a difference?
Many faces of storage hypervisor, virtual storage or storage virtualization
The Human Face of Big Data, a Book Review
Why VASA is important to have in your VMware CASA

Congratulations to VCE, along with their proud parents, family, friends and partners; now how long will it take to reach your next billion dollars in annual run rate revenue? Hopefully it won't be three years until the next VCE revisited, now and Zen ;).

Disclosure: EMC and Cisco have been StorageIO clients; I am a VMware vExpert, which gets me a free beer after I pay for VMworld; and Intel has named two of my books on their Recommended Reading List for Developers.

Ok, nuff said, time to head off to vBeers over in Minneapolis.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive (part II)

StorageIO industry trends cloud, virtualization and big data

This is the second in a multi-part series of posts (read the first post here) looking at whether large enterprise and legacy storage systems are dead, along with what today's EMC VMAX 10K updates mean.

Thus on January 14, 2013 it is time for a new EMC Virtual Matrix (VMAX) model 10,000 (10K) storage system. EMC has been promoting their January 14 live virtual event for a while now. The significance of January is that it is when (along with May or June) many new systems, solutions or upgrades are announced on a staggered basis.

Historically speaking, January and February, along with May and June, are when you have seen many of the larger announcements from EMC being made. Case in point: back in February of 2012 VFCache was released, then in May 2012 at EMCworld in Las Vegas there were 42 announcements made, with others later in the year.

Click here to see images of the car stuffing or click here to watch a video.

Going back further, in January 2011 there was the record-setting event in New York City complete with 26 people being compressed, deduped, single-instanced, optimized, stacked and tiered into a Mini Cooper (Coop) automobile (read and view more here).

Now back to the VMAX 10K enhancements

As an example of a company, product family and specific storage system model still being alive, there is the VMAX 10K. Although this announcement by EMC is VMAX 10K centric, there is also a new version of the Enginuity software (firmware, storage operating system, valueware) that runs across all VMAX based systems including the VMAX 20K and VMAX 40K. Read here, here, here and here to learn more about VMAX and Enginuity systems in general.

Some main themes of this announcement include Tier 1 reliability, availability and serviceability (RAS) storage system functionality at Tier 2 pricing for traditional, virtual and cloud data centers.

Some other themes of this announcement by EMC:

  • Flexible, scalable and resilient with performance to meet dynamic needs
  • Support private, public and hybrid cloud along with federated storage models
  • Simplified decision-making, acquisition, installation and ongoing management
  • Enable traditional, virtual and cloud workloads
  • Complement its siblings VMAX 40K, 20K and SP (Service Provider) models

Note that the VMAX SP is a model configured and optimized for easy self-service and private cloud, storage as a service (SaaS), IT as a Service (ITaaS) and public cloud service providers needing multi-tenant capabilities with service catalogs and associated tools.

So what is new with the VMAX 10K?

It is twice as fast (per EMC performance results) as the earlier VMAX 10K, leveraging faster 2.8GHz Intel Westmere processors vs. the earlier 2.5GHz Westmere processors. In addition to faster cores, there are more of them: from 4 to 6 per director, and from 8 to 12 per VMAX 10K engine. The PCIe (Gen 2) I/O busses remain unchanged, as does the RapidIO interconnect. RapidIO is used for connecting nodes and engines, while PCIe is used for adapter and device connectivity. Memory stays the same at up to 128GB of global DRAM cache, along with dual virtual matrix interfaces (how the nodes are connected). Note that there is no increase in the amount of DRAM based cache memory in this new VMAX 10K model.

This should prompt the question: for traditionally cache-centric storage systems such as VMAX that depend on cache for performance, how dependent (or effective) are they now on CPUs and their associated L1/L2 caches? Also, how much has the Enginuity code under the covers been enhanced to leverage multiple cores and threads, thus shifting from being DRAM cache dependent to processor hungry?

Also new with the updated VMAX 10K include:

  • Support for dense 2.5 inch drives, along with mixed 2.5 inch and 3.5 inch form factor devices with a maximum of 1,560 HDDs. This means support for 2.5 inch 1TB 7,200 RPM SAS HDDs, along with fast SAS HDDs, SLC/MLC and eMLC solid state devices (SSD) also known as electronic flash devices (EFD). Note that with higher density storage configurations, good disk enclosures become more important to counter or prevent the effects of drive vibration, something that leading vendors are paying attention to and so should customers.
  • With the VMAX 10K, EMC is also adding support for certain third-party racks or cabinets to be used for mounting the product. This means being able to mount the VMAX main system and DAE components into selected cabinets or racks to meet specific customer, colo or other environment needs for increased flexibility.
  • For security, the VMAX 10K also supports Data at Rest Encryption (D@RE), which is implemented within the VMAX platform. All data is encrypted on every drive and every drive type (drive independent) within the VMAX platform to avoid performance impacts, using AES 256 fixed block encryption with FIPS 140-2 validation (#1610) and embedded or external key management including RSA Key Manager. Note that since the storage system based encryption is done within the VMAX platform or controller, not only is the encrypt/decrypt off-loaded from servers, it also means that any device from SSD to HDD to third-party storage arrays can be encrypted. This is in contrast to drive based approaches such as self-encrypting devices (SED) or other full drive encryption approaches. With embedded key management, encryption keys are kept and managed within the VMAX system, while external mode leverages RSA key management as part of a broader security solution approach (see the conceptual sketch after this list).
  • In terms of addressing ease of decision-making and acquisition, EMC has bundled the core Enginuity software suite (virtual provisioning, FTS and FLM, DCP (dynamic cache partitioning), host I/O limits, Optimizer/virtual LUN and integrated RecoverPoint splitter). In addition are bundles for optimization (FAST VP, EMC Unisphere for VMAX with heat map and dashboards), availability (TimeFinder for VMAX 10K) and migration (Symmetrix migration suite, Open Replicator, Open Migrator, SRDF/DM, Federated Live Migration). Additional optional software includes RecoverPoint CDP, CRR and CLR, Replication Manager, PowerPath, SRDF/S, SRDF/A and SRDF/DM, Storage Configuration Advisor, Open Replicator with Dynamic Mobility and the ControlCenter/ProSphere package.
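
To make the D@RE discussion above more concrete, below is a conceptual Python sketch of AES-256 fixed block (sector) encryption using the XTS mode commonly used for data at rest. Note that XTS here is my assumption for illustration purposes, as this post does not detail EMC's actual cipher mode or internal implementation:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Conceptual sketch only: illustrates AES fixed block (sector)
# encryption as commonly done for data at rest; not EMC's D@RE code.
key = os.urandom(64)  # AES-256-XTS uses a 512-bit (two-key) key

def encrypt_sector(lba, plaintext):
    tweak = lba.to_bytes(16, "little")  # sector number used as the tweak
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

sector = os.urandom(512)  # one 512-byte "drive sector"
ciphertext = encrypt_sector(lba=42, plaintext=sector)
assert len(ciphertext) == 512  # same size: no capacity overhead
```

Note how the ciphertext is the same size as the plaintext sector, which is one reason controller based fixed block encryption avoids capacity surprises regardless of the device type behind it.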

Who needs a VMAX 10K or where can it be used?

As the entry-level model of the VMAX family, certain organizations that are growing and looking for an alternative to traditional mid-range storage systems should be a primary opportunity. Assuming the VMAX 10K can sell at tier-2 prices with a focus on tier-1 reliability, feature functionality and simplification, while allowing their channel partners to make some money, then EMC can have success with this product. The challenge however will be helping their direct and channel partner sales organizations to avoid competing with their own products (e.g. high-end VNX) vs. those of others.

Consolidation of servers with virtualization, along with storage system consolidation to remove complexity in management and costs, should be another opportunity with the ability to virtualize third-party storage. I would expect EMC and their channel partners to place the VMAX 10K with its storage virtualization of third-party storage as an alternative to HDS VSP (aka USP/USPV) and the HP XP P9000 (Hitachi based) products, or for block storage needs the NetApp V-Series among others. There could be some scenarios where the VMAX 10K could be positioned as an alternative to the IBM V7000 (SVC based) for virtualizing third-party storage, or for larger environments, some of the software based appliances where there are scaling-with-stability (performance, availability, capacity, ease of management, feature functionality) concerns.

Another area where the VMAX 10K could see action, which will fly in the face of some industry thinking, is deployment in new and growing managed service providers (MSP), public clouds, and community clouds (private consortiums) looking for an alternative to open source based or traditional mid-range solutions. Otoh, I can't wait to hear somebody think outside of both the old and new boxes about how a VMAX 10K could be used beyond traditional applications or functionality. For example, filling it up with a few SSDs, then balancing with 1TB 2.5 inch SAS HDDs and 3.5 inch 3TB (or larger when available) HDDs as an active archive target leveraging the built-in data compression.

How about if EMC were to support cloud optimized HDDs such as the Seagate Constellation Cloud Storage (CS) HDDs that were announced late in 2012, as well as the newer enterprise class HDDs, for opening up new markets? Also keep in mind that some of the new 2.5 inch SAS 10,000 RPM (10K) HDDs have the same performance capabilities as traditional 3.5 inch 15,000 RPM (15K) drives in a smaller footprint, helping drive and support increased density of performance and capacity with improved energy effectiveness.

How about attaching a VMAX 10K with the right type of cost-effective (aligned to a given scenario) SSDs, HDDs or third-party storage to a cluster or grid of servers running OpenStack including Swift, CloudStack, Basho Riak CS, Cleversafe, Scality, Caringo, Ceph or even EMC's own Atmos (which supports external storage) for cloud storage or object based storage solutions? Granted, that would run counter to the current new-box thinking of moving away from RAID based systems in favor of low-cost JBOD storage in servers, however what the heck, let's think in pragmatic ways.

Will EMC be able to open new markets and opportunities by making the VMAX and its Enginuity software platform and functionality more accessible and affordable, leveraging the VMAX 10K as well as the VMAX SP? Time will tell; after all, I recall back in the mid to late 90s, and then again several times during the 2000s, similar questions and conversations, not to mention predictions of the demise of large traditional storage systems.

Continue reading about what else EMC announced on January 14, 2013 in addition to the VMAX 10K updates here in the next post in this series. Also check out Chuck's EMC blog to see what he has to say.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

SSD past, present and future with Jim Handy

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here), which you can also listen to via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml).
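
If your preferred means happens to be a few lines of Python, here is a small sketch that pulls the feed above and lists the episodes; it assumes a standard RSS 2.0 layout (channel/item/title/enclosure elements):

```python
import urllib.request
import xml.etree.ElementTree as ET

# Fetch the podcast RSS feed and list episode titles and MP3 links;
# a standard RSS 2.0 structure is assumed here.
FEED = "https://storageio.com/StorageIO_Podcast.xml"

with urllib.request.urlopen(FEED) as resp:
    root = ET.fromstring(resp.read())

for item in root.iter("item"):
    title = item.findtext("title", default="(untitled)")
    enclosure = item.find("enclosure")
    mp3 = enclosure.get("url") if enclosure is not None else "n/a"
    print(f"{title}: {mp3}")
```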

StorageIO industry trends cloud, virtualization and big data

In this episode, I talk with SSD, NAND flash and DRAM chip analyst Jim Handy of Objective Analysis at the LSI AIS (Accelerating Innovation Summit) 2012 in San Jose. Our conversation includes SSD past, present and future, market and industry trends, who is doing what, and things to keep an eye and ear open for, along with server, storage and memory convergence.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Jim and me.

StorageIO podcast

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts (podcasts) and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode SSD Past, Present and Future with Jim Handy.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Part IV: PureSystems, something old, something new, something from big blue

This is the fourth in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here, and the next post here.

So what does this mean for IBM Business Partners (BPs) and ISVs?
What could very well differentiate IBM PureSystems from those of competitors is to take what their partner NetApp has done with FlexPod, combining third-party applications from Microsoft and SAP among others, and take it to the next level. Part of what helped make EMC Centera a success (or at least sell a lot of them) was including and leveraging third-party ISVs and BPs to add value. Compared to other vendors with object based or content addressable storage (CAS) or online archive platforms that focused on technology features, functions, speeds and feeds, EMC realized the key was getting ISVs on board so that BPs and their own direct sales force could sell the solution.

With PureSystems, IBM is revisiting what they have done in the past, which is to offer bundled solutions providing incentives for ISVs to support and BPs to sell the IBM brand solution. EMC took an early step by including VMware with their Vblock, combining server, storage, networking and software, with NetApp taking the next step by adding SAP, Microsoft and other applications. Dell, HP, Oracle and others are following suit, so it only makes sense that IBM returns to its roots, leveraging its DNA to reach out and get on board those ISVs who are current, past or new opportunities.

IBM is throwing its resources, including their innovation centers for training around the world, at this initiative so business partners can get the knowledge and technical support they need. In other words, workshops or seminars on how to sell, deploy and set up these systems, application and customer testing or proof of concepts, and other things one would expect out of IBM for such an initiative. In addition to technology and sales training along with marketing support, IBM is making its financing capabilities available to help customers, as well as offering incentives to its business partners to simplify acquisitions.

So what buzzword bingo topics and themes did IBM address with this announcement:
IBM did a fantastic job of knocking the ball out of the park with this announcement as far as buzzword bingo goes, and deserves an atta boy or atta girl!

So what about how this will affect sales of BladeCenters or other systems?
If all IBM and their BPs do is encroach on existing system sales to circle the wagons and protect the installed base, that would be one thing. However, if IBM and their BPs can use the new packaging and model approach to reestablish customers and partnerships, or open and expand into new adjacent markets, then the net difference should be more BladeCenters (excuse me, PureFlex) being sold.

So what will this cost?
IBM is citing entry PureSystems Express models starting at around $100,000 USD for base systems, with others starting at around $200,000 and $300,000, expandable into larger configurations and budgets. Note that like airlines that advertise a low airfare and then charge extra for peanuts, drinks, extra bag space, changes to reservations and so forth, look at these and related systems not just at the starting price, but also at expansion costs over different time periods. Contact IBM, your BP or ISV to find out what one of these systems will do for, and cost, you.

So what about VARs and IBM business partners (BPs)?
This could be a boon for those BPs and ISVs that had previously sold their software solutions bundled with IBM hardware platforms, who were being challenged by other converged solution stacks or were being forced to unbundle. This will also allow those business partners to compete on par with other converged solutions, or continue selling the pieces they are familiar with, however under a new umbrella. Of course, pricing will be a focus and concern for some, who will want to see what added value exists vs. acquiring the various components separately. This also means that IBM will have to make incentives available for their partners to make a living while also allowing their customers to afford solutions and maximize their return on innovation (the new ROI) and enablement.

Click here to view the next post in this series, ok nuff said for now.

Here are some links to learn more:
Various IBM Redbooks and related content
The blame game: Does cloud storage result in data loss?
What do you need when its time to buy a new server?
2012 industry trends perspectives and commentary (predictions)
Convergence: People, Processes, Policies and Products
Buzzword Bingo and Acronym Update V2.011
The function of XaaS(X) Pick a letter
Hard product vs. soft product
Part I: PureSystems, something old, something new, something from big blue
Part II: PureSystems, something old, something new, something from big blue
Part III: PureSystems, something old, something new, something from big blue
Part IV: PureSystems, something old, something new, something from big blue
Part V: PureSystems, something old, something new, something from big blue
Cloud and Virtual Data Storage Networking

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Part V: PureSystems, something old, something new, something from big blue

This is the fifth in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here.

So what about vendor or technology lock in?
So who is responsible for vendor or technology lock in? When I was working in IT organizations (e.g. what vendors call the customer), the thinking was that vendors are responsible for lock in. Later, when I worked for different vendors (manufacturers and VARs), the thinking was that lock in is caused by the competition. More recently I am of the mindset that vendor lock in is a shared responsibility issue and topic. I'm sure some marketing wiz or sales type will be happy to explain the subtle differences of how their solution does not cause lock in.

Vendor lock in can be a shared responsibility. Generally speaking, lock in, stickiness and account control are essentially the same, or at least strive for similar results. For example, vendor lock in to some has a negative stigma, whereas vendor stickiness may be a new term, perhaps even sounding cool, and thus not a concern. Remember the Mary Poppins song about a spoonful of sugar making the medicine go down? In other words, sometimes changing and using a different term such as sticky vs. vendor lock in helps make the situation taste better.

So what should you do?
Take a closer look if you are considering converged infrastructures, cloud or data centers in a box, or turnkey application or information services deployment platforms. Likewise, if you are looking at specific technologies such as Cisco UCS, Dell vStart, EMC Vblock (or via VCE), HP, NetApp FlexPod or Oracle (ExaLogic, ExaData, etc.) among others, also check out IBM PureSystems (PureFlex and PureApplication). Compare and contrast these converged solutions with your traditional procurement and deployment modes, including the cost of acquiring hardware, software, and ongoing maintenance or service fees, along with the value or benefit of bundled tools. There may be a higher cost for converged systems in some scenarios; however, compare on the value and benefit derived vs. doing the integration yourself.
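
As a rough way to structure such a comparison, here is a simple Python sketch; every figure in it is hypothetical and for illustration only, not from any vendor price list:

```python
# Hypothetical numbers only, for structuring your own comparison;
# integration labor is what converged bundles aim to reduce.
def three_year_cost(hw, sw, annual_maint, integration_hours, rate=150.0):
    return hw + sw + 3 * annual_maint + integration_hours * rate

converged = three_year_cost(hw=250_000, sw=80_000,
                            annual_maint=40_000, integration_hours=80)
diy       = three_year_cost(hw=200_000, sw=70_000,
                            annual_maint=35_000, integration_hours=600)
print(f"converged: ${converged:,.0f}  DIY: ${diy:,.0f}")
```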

Compare and contrast what converged solutions enable; however, also consider what constraints exist in terms of flexibility to reconfigure in the future or make other changes. For example, as part of integration, does a solution take a lowest common denominator approach to software and firmware revisions for compatibility that may lag behind what you can apply to standalone components? Also, compare and contrast various reference architectures with different solution bundles or packages.

Most importantly compare and evaluate the solutions on their ability to meet and exceed your base requirements while adding value and enabling return on innovation while also being cost-effective. Do not be scared of these bundled solutions; however do your homework to make informed decisions including overcoming any concerns of lock in or future costs and fees. While these types of solutions are cool or interesting from a technology perspective and can streamline acquisition and deployment, make sure that there is a business benefit that can be addressed as well as enablement of new capabilities.

So what does this all mean?
Congratulations to IBM with their PureSystems for leveraging their DNA and roots, bundling what had been unbundled before clouds and stacks were popular and trendy. IBM has done a good job of talking vision and strategy along the lines of converged and dynamic, elastic and smart, clouds and other themes for the past couple of years, while selling the pieces as parts of solutions, ala carte, or packaged by their ISVs and business partners.

What will be interesting to see is if BladeCenter customers shift to buying PureFlex, which should be an immediate boost giving proof points of adoption, while essentially upselling what was previously available. However, more interesting will be to see if net overall new customers and footprints are sold, as opposed to simply selling a newer and enhanced version of previous components.

In other words, will IBM be able to keep up their focus and execution where they have sold the previously available components, while also holding onto current ISV and BP footprint sales, and perhaps enabling those partners to recapture some hardware and solution sales that had been unbundled (e.g. ISV software sold separate of IBM platforms) and move into new adjacent markets.

Here are some links to learn more:
Various IBM Redbooks and related content
The blame game: Does cloud storage result in data loss?
What do you need when its time to buy a new server?
2012 industry trends perspectives and commentary (predictions)
Convergence: People, Processes, Policies and Products
Buzzword Bingo and Acronym Update V2.011
The function of XaaS(X) Pick a letter
Hard product vs. soft product
Part I: PureSystems, something old, something new, something from big blue
Part II: PureSystems, something old, something new, something from big blue
Part III: PureSystems, something old, something new, something from big blue
Part IV: PureSystems, something old, something new, something from big blue
Part V: PureSystems, something old, something new, something from big blue
Cloud and Virtual Data Storage Networking

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Part III: PureSystems, something old, something new, something from big blue

This is the third in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here, and the next post here.

So what about the IBM Virtual Appliance Factory?
Where PureFlex and PureApplication (PureSystems) are the platforms or vehicles for enabling your journey to efficient and effective information services delivery, and PureSystems Centre (or center for those of you in the US) is the portal or information center, the IBM Virtual Appliance Factory (VAF) is a collection of tools, technologies, processes and methodologies. The VAF helps developers or ISVs prepackage applications or solutions for deployment into Kernel Virtual Machine (KVM) on Intel and IBM PowerVM virtualized environments that are also supported by PureFlex and PureApplication systems.

VAF technologies include the Distributed Management Task Force (DMTF) Open Virtualization Format (OVF) and Open Virtualization Alliance (OVA) related technologies, along with other tools for combining operating systems (OS), middleware and solution software into a delivery package or virtual appliance that can be deployed into cloud and virtualized environments. Benefits include reducing the complexity of working with logical partitions (LPAR) and VM configuration, plus abstraction and portability for deployment or movement from private to public environments. The net result should be less complexity, lowering costs while reducing mean time to install and deploy. Here is a link to learn more about VAF and its capabilities and how to get started.
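
For a sense of what an OVF descriptor involves, here is a minimal Python sketch that emits a skeleton descriptor; a real appliance built with tools such as the VAF would carry full virtual hardware sections, manifests and signatures well beyond this bare-bones illustration, and the file names shown are made up:

```python
import xml.etree.ElementTree as ET

# Minimal OVF skeleton for illustration only; real descriptors include
# virtual hardware sections, product sections, manifests and signatures.
OVF = "http://schemas.dmtf.org/ovf/envelope/1"
ET.register_namespace("ovf", OVF)

env = ET.Element(f"{{{OVF}}}Envelope")
refs = ET.SubElement(env, f"{{{OVF}}}References")
ET.SubElement(refs, f"{{{OVF}}}File",
              {f"{{{OVF}}}href": "appliance-disk1.vmdk",   # hypothetical
               f"{{{OVF}}}id": "file1"})
vs = ET.SubElement(env, f"{{{OVF}}}VirtualSystem", {f"{{{OVF}}}id": "demo-vm"})
ET.SubElement(vs, f"{{{OVF}}}Info").text = "Packaged OS + middleware + app"

ET.ElementTree(env).write("appliance.ovf", xml_declaration=True,
                          encoding="utf-8")
```

The point of the format is exactly the portability benefit described above: the same packaged appliance can move between private and public environments that understand OVF.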

So what does cloud ready mean?
IBM is touting cloud ready capability in the context of rapid out-of-the-box ease of deployment and use, as well as being easy to acquire. This is in line with what others are doing with converged server, storage, networking, hardware, software and hypervisor solutions. IBM is also touting that they are using the same publicly available products as what they use in their own SmartCloud public service offerings.

So what is scale in vs. scale up, scale out or scale within?
Traditional thinking is that scaling refers to increasing capacity. Scaling also means increasing performance, availability and functionality, with stability. Scaling with stability means that as performance, availability, capacity or other features are increased, problems and complexity are not introduced. For example, scaling performance should not result in loss of availability or capacity, a capacity increase should not come at the cost of performance or availability, and management tools should work for you instead of you working for them.

Scaling up and scaling out have been used to describe scaling performance, availability, capacity and other attributes beyond the limits of a single system, box or cabinet. For example, clustered, cloud, grid and other approaches refer to scaling out or horizontally across different physical resources. Scaling up or scaling vertically means scaling within a system using faster, denser technologies, doing more in the same footprint. HDS announced a while back what they refer to as 3D scaling, which embraces the above notions of scaling up, out and within across different dimensions. IBM is building on that by emphasizing scaling that leverages faster, denser components such as Power7 and Intel processors to scale within the box, system or node, which can also be scaled out using enhanced networking from IBM and their partners.

So what about backup/restore, BC, DR and general data protection?
I would expect IBM to step up and talk about how they can leverage their data protection and associated management toolsets, technologies and products. IBM already has the components (hardware, software) for backup/restore, BC, DR, data protection and security, along with associated service offerings. One would expect IBM to come out not only with backup, restore, BC, DR and archiving solutions or versions, but also data preservation and compliance appliance variants and related themes. We know that IBM has the pieces, people, processes and practices; let us see if IBM has learned from their competitors who may have missed data protection messaging opportunities. Sometimes what is assumed to be understood does not get discussed; however, often what is assumed and is not understood should be discussed. Hence, let us see if IBM does more than say oh yes, we have those capabilities and products too.

So what do these have compared to others who are doing similar things?
Different vendors have taken various approaches for bringing converged products or solutions to the marketplace. Not surprisingly, storage centric vendors EMC and NetApp have partnered with Cisco for servers (compute). Where Cisco was known for networking, having more recently moved into compute servers, EMC and NetApp are known for storage and are moving into the converged space with servers. Since EMC and NetApp often compete with storage solution offerings from traditional server vendors Dell, HP, IBM and Oracle among others, and Cisco is now also competing with those same server vendors it previously partnered with for networking, it makes sense for Cisco, EMC and NetApp to partner.

While EMC owns a large share of VMware, they do also support Microsoft and other partners including Citrix. NetApp followed EMC into the converged space partnering with Cisco for compute and networking adding their own storage along with supporting hypervisors from Citrix, Microsoft and VMware along with third-party ISVs including Microsoft and SAP among others. Dell has evolved from reference architectures to products called vStart that leverage their own technologies along with those of partners.

A challenge for Dell however is that vStart sounds more like a service offering as opposed to a product that they or their VARs and business partners can sell and add value around. HP is also in the converged game, as is Oracle among others. With PureSystems, IBM is building on what their competitors, and in some cases partners, are doing by adding and messaging more around the many ISVs and applications that are part of the PureSystems initiative. Rest assured, there is more to PureSystems than simply some new marketing, press releases, videos and talking about partners and ISVs. The following table provides a basic high-level comparison of what different vendors are doing or working towards and is not intended to be a comprehensive review.

| Who | What | Server | Storage | Network | Software | Other comments |
|-----|------|--------|---------|---------|----------|----------------|
| Cisco | UCS | Cisco | Partner | Cisco | Cisco and partners | Various hypervisors and OS |
| Dell | vStart | Dell | Dell | Dell and partners | Dell and partners | Various hypervisors, OS and bundles |
| EMC (VCE) | Vblock, VSPEX | Cisco | EMC | Cisco and partners | EMC, Cisco and partners | Various hypervisors, OS and bundles; VSPEX adds more partner solution bundles |
| HP | Converged | HP | HP | HP and partners | HP and partners | Various hypervisors, OS and bundles |
| IBM | PureFlex | IBM | IBM | IBM and partners | IBM and partners | Various hypervisors, OS and bundles, adding more ISV partners |
| NetApp | FlexPod | Cisco | NetApp | Cisco and partners | NetApp, Cisco and partners | Various hypervisors, OS and bundles for SAP, Microsoft among others |
| Oracle | ExaLogic (Exadata database) | Oracle | Oracle | Partners | Oracle and partners | Various Oracle software tools and technologies |

So what took IBM so long compared to others?
Good question, what is the saying? Rome was not built in a day!

Click here to view the next post in this series, ok, nuff said for now.

Here are some links to learn more:
Various IBM Redbooks and related content
The blame game: Does cloud storage result in data loss?
What do you need when its time to buy a new server?
2012 industry trends perspectives and commentary (predictions)
Convergence: People, Processes, Policies and Products
Buzzword Bingo and Acronym Update V2.011
The function of XaaS(X) Pick a letter
Hard product vs. soft product
Part I: PureSystems, something old, something new, something from big blue
Part II: PureSystems, something old, something new, something from big blue
Part III: PureSystems, something old, something new, something from big blue
Part IV: PureSystems, something old, something new, something from big blue
Part V: PureSystems, something old, something new, something from big blue
Cloud and Virtual Data Storage Networking

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Part II: PureSystems, something old, something new, something from big blue

This is the second in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here, and the next post here.

So what are the speeds and feeds of a PureFlex system?
The components that make up the PureFlex line include:

  • IBM management node (server with management software tools).
  • 10Gb Ethernet (LAN) switch, adapters and associated cabling.
  • IBM V7000 virtual storage (also see here and here).
  • Dual 8GFC (8Gb Fibre Channel) SAN switches and adapters.
  • Servers with either x86 xSeries using, for example, Intel Sandy Bridge EP 2.6 GHz 8-core processors, or IBM's Power7 based pSeries for AIX. Note that IBM with their BladeCenter systems (now rebadged as part of PureSystems) supports various I/O and networking interfaces including SAS, Ethernet, Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and InfiniBand, using adapters and switches from various partners.
  • Virtual machine (VM) hypervisors such as Microsoft Hyper-V and VMware vSphere/ESX among others. In addition to x86 based hypervisors or kernel virtual machines (KVM), IBM also supports its own virtualization technology found in Power7 based systems. Check the IBM support matrix for specific configurations and current offerings.
  • Optional middleware such as IBM WebSphere.

Read more speeds and feeds at the various IBM sites including on Tony Pearson’s blog site.

So what is IBM PureApplication System?
This builds off and on PureFlex systems as a foundation for deploying various software stacks to deliver traditional IT applications or cloud Platform as a Service (PaaS), Software as a Service (SaaS) and Application as a Service (AaaS) models. For example, cloud or web stacks, Java, database, analytics or other applications, with buzzwords of elastic, scalable, repeatable, self-service, rapid provisioning, resilient, multi-tenant and secure among others. Note that if you are playing or into buzzword bingo, go ahead and say Bingo when you are ready, as IBM has a winner in this category.

So what is the difference between PureFlex and PureApplication systems?
PureApplication systems leverage PureFlex technologies, adding extra tools and functionality for cloud-like application delivery.

So what is IBM PureSystems Centre?
It is a portal or central place where IBM and their business partner solutions pertaining to PureApplication and PureFlex systems can be accessed, including information for initial installation support along with maintenance and upgrades. At launch, IBM is touting more than 150 solutions or applications that are available or qualified for deployment on PureApplication and PureFlex systems. In addition, IBM Patterns (aka templates) can also be accessed via this venue. Examples of application or independent software vendor (ISV) developed solutions for banking, education, financial, government, healthcare and insurance can be found at the PureSystems Centre portal (here, here and here).

So what part of this is a service and what is a product?
Other than the PureSystems Centre, which is a web portal for accessing information and technologies, PureFlex and PureApplication along with the Virtual Appliance Factory are products or solutions that can be bought from IBM or their business partners. In addition, IBM business partners or third parties can also use these solutions, housed in their own, a customer, or a third-party facility, for delivering managed service provider (MSP) capabilities, along with other PaaS, SaaS or AaaS type functionalities. In other words, these solutions can be bought or leased by IT and other organizations for their own use in a traditional IT deployment model, or a private, hybrid or public cloud model.

Another option is for service providers to acquire these solutions for use in developing and delivering their own public, private or hybrid services. IBM is providing the hard product (hardware and software) that enables your return on innovation (the new ROI) to create and deliver your own soft product (services and experiences) consumed by those who use those capabilities. In addition to traditional quantitative financial return on investment (traditional ROI) and total cost of ownership (TCO), the new ROI complements those by adding a qualitative aspect. Your return on innovation will depend on what you are capable of doing that enables your customers or clients to be productive or creative. For example, enabling your customers or clients to boost productivity, remove complexity and cost while maintaining or enhancing Quality of Service (QoS), service level objectives (SLOs) and service level agreements (SLAs), in addition to supporting growth, by using a given set of hard products. Thus, your soft product is a function of your return on innovation and vice versa.

Note that in this context, not to be confused with hardware and software, hard products are those technologies including hardware, software and services that are obtained and deployed to create a soft product. A soft product in this context does not refer to software; rather, it is the combination of hard products plus your own developed or separately obtained software and tools, along with best practices and usage models. Thus, two organizations can use the same hard products and deliver separate soft products with different attributes and characteristics including cost, flexibility and customer experience.

So what is a Pattern of Expertise?
A pattern of expertise combines operational know-how, experience and knowledge about common infrastructure resource management (IRM), data center infrastructure management (DCIM) and other commonly repeated related processes, practices and workflows, including provisioning. Common patterns of activity and expertise for routine or other time-consuming tasks, which some might refer to as templates or workflows, enable policy driven automation. For example, IBM cites recurring time-consuming tasks that lend themselves to being automated, such as provisioning, configuration, upgrades and associated IRM, DCIM and data protection, storage and application management activities. Automation software tools are included as part of PureSystems, with patterns being downloadable as packages for common tasks and applications found at the IBM PureSystems Centre.
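
Since IBM's actual pattern format is not detailed in this post, here is a hypothetical Python sketch of the underlying template-plus-runner idea, with made-up task names and parameters:

```python
# Hypothetical sketch of a "pattern": expert knowledge captured as data
# (steps and parameters) that a generic runner replays consistently.
PATTERN = {
    "name": "web-app-deploy",
    "steps": [
        {"task": "provision_vm", "params": {"cpus": 4, "mem_gb": 16}},
        {"task": "install",      "params": {"package": "WebSphere"}},
        {"task": "configure_ha", "params": {"nodes": 2}},
    ],
}

def run_pattern(pattern, handlers):
    # Because each step is data rather than manual effort, the same
    # workflow can be repeated, shared and audited.
    for step in pattern["steps"]:
        handlers[step["task"]](**step["params"])

handlers = {
    "provision_vm": lambda cpus, mem_gb: print(f"VM: {cpus} vCPU/{mem_gb}GB"),
    "install":      lambda package:      print(f"installing {package}"),
    "configure_ha": lambda nodes:        print(f"HA across {nodes} nodes"),
}
run_pattern(PATTERN, handlers)
```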

At announcement, there are three types or categories of patterns:

  • IBM patterns: Factory created and supplied with the systems, based on experience IBM has derived from various managers, engineers and technologists for automating common tasks including configuration, deployment and application upgrades and maintenance. The aim is to cut the amount of time and intervention for deployment of applications and other common functions, enabling IT staff to be more productive and address other needs.
  • ISV patterns: These leverage experience and knowledge from ISVs partnered with IBM, which at the time of launch numbered over 125 vendors offering certified PureSystems Ready applications. The benefit and objective are to cut the time and complexity associated with procuring (e.g. purchasing), deploying and managing third-party ISV software. Downloadable pattern packages can be found at the IBM PureSystems Centre.
  • Customer patterns: Enable customers to collect and package their own knowledge, processes, rules, policies and best practices into patterns for automation. In addition to collecting knowledge for acquisition, configuration, day to day management and troubleshooting, these patterns can facilitate automation of tasks to ease onboarding of new staff, employees or contractors. In addition, these patterns or templates capture workflows for automation, enabling shorter deployment times of systems and applications into locations where skill sets do not exist.

Here is a link to some additional information about patterns on the IBM developerWorks site.

Click here to view the next post in this series, ok, nuff said for now.

Here are some links to learn more:
Various IBM Redbooks and related content
The blame game: Does cloud storage result in data loss?
What do you need when its time to buy a new server?
2012 industry trends perspectives and commentary (predictions)
Convergence: People, Processes, Policies and Products
Buzzword Bingo and Acronym Update V2.011
The function of XaaS(X) Pick a letter
Hard product vs. soft product
Part I: PureSystems, something old, something new, something from big blue
Part II: PureSystems, something old, something new, something from big blue
Part III: PureSystems, something old, something new, something from big blue
Part IV: PureSystems, something old, something new, something from big blue
Part V: PureSystems, something old, something new, something from big blue
Cloud and Virtual Data Storage Networking

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved