Introducing Windows Subsystem for Linux WSL Overview #blogtober


server storage I/O data infrastructure trends

Updated 1/21/2018

Microsoft has been increasing its support of Linux across the Azure public cloud, Hyper-V with Linux Integration Services (LIS), and Windows platforms including Windows Subsystem for Linux (WSL) on both Windows 10 and Windows Server, along with Docker support.

WSL installed with Ubuntu on Windows 10

WSL with Ubuntu installed and open in a window on one of my Windows 10 systems.

WSL is not a virtual machine (VM) running on Windows or Hyper-V; rather, it is a subsystem that coexists next to win32 (read more about how it works, its features and enhancements here). Once installed, WSL enables use of the Linux bash shell along with familiar tools (find, grep, sed, awk and rsync among others) as well as services such as ssh and MySQL.

What this all means is that if you work with both Windows and Linux, you can do so on the same desktop, laptop, server or system using your preferred commands. For example, in one window you can be using PowerShell or traditional Windows commands and tools, while in another window working with grep, find and other tools, eliminating the need to install ports such as wingrep.

Installing WSL

Depending on which release of Windows desktop or server you are running, there are a couple of different install paths. Since my Windows 10 system is on the most recent release (1709), I was able to simply go to the Microsoft Windows Store via the desktop, search for Windows Linux, select a distribution, install and launch it. Microsoft has some useful information for installing WSL on different Windows versions here, as well as for Windows Server here.

Get WSL from Windows Store

Get WSL from the Windows Store, or find more information and options here.

Microsoft WSL install

Click on Get the app

Select which Linux for WSL to install

Select desired WSL distribution

SUSE linux for WSL

Let's select SUSE, as I already have Ubuntu installed (I have both).

WSL installing SUSE

SUSE WSL in the process of downloading. Note that SUSE needs a (free) access code that you get from https://www.suse.com/subscriptions/sles/developer/. The wait for the download and install is a good time to get that code.

launching WSL on Windows 10

Launching WSL with SUSE, you will be prompted to enter the code mentioned above; if you do not have a code, get it here from SUSE.

completing install of WSL

The WSL installation is very straightforward: enter the SUSE code (Ubuntu did not need a code). Note the Ubuntu and SUSE WSL taskbar icons circled bottom center.

Ubuntu and SUSE WSL on Windows 10

Provide a username and password for accessing the WSL bash shell, confirm how root and sudo are to be applied, and that is it. Seriously, the install for WSL, at least with Windows 10 1709, is that fast and easy. Note in the above image, I have WSL with Ubuntu open in a window on the left, WSL with SUSE on the right, and their taskbar icons bottom center.

Windows WSL install error 0x8007007e

Enable Windows Subsystem for Linux Feature on Windows

If you get the above WSL error message 0x8007007e when installing the Ubuntu, SUSE or another distribution, make sure to enable the Windows Subsystem for Linux feature if it is not already installed.

Windows WSL install error fix

One option is to install additional Windows features via settings or control panel. For example, Control panel -> Programs and features -> Turn Windows features on or off -> Check the box for Windows Subsystem for Linux

Another option is to enable the Windows Subsystem for Linux feature via PowerShell, for example:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Using WSL

Once you have WSL installed, try something simple such as view your present directory:

pwd

Then look at the Windows C: drive location:

ls /mnt/c -al

In case you did not notice above, you can access Windows files and folders from the bash shell by placing /mnt in front of the drive letter path. Note that paths are case-sensitive, such as User vs. user or Documents vs. documents.
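As an aside, the /mnt mapping is simple enough to sketch in a few lines of shell. This is purely an illustration (the path and username Test1 are hypothetical, and it assumes GNU sed); inside WSL the mapping is done for you automatically:

```shell
# Sketch: convert a Windows path to its WSL /mnt equivalent.
# The path (and user Test1) are illustrative -- substitute your own.
winpath='C:\Users\Test1\Documents'

# Flip backslashes to forward slashes, then turn "C:" into "/mnt/c"
# (\L is a GNU sed extension that lowercases the drive letter).
linuxpath=$(printf '%s' "$winpath" | sed -e 's|\\|/|g' -e 's|^\([A-Za-z]\):|/mnt/\L\1|')
echo "$linuxpath"   # /mnt/c/Users/Test1/Documents
```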

As a further example, I needed to change several .htm, .html, .php and .xml files on a Windows system whose contents had not yet been updated from http://storageio.com to https://storageio.com. Instead of installing wingrep or similar tools, with WSL and Ubuntu the files can be found with grep:

grep "http://storageio.com" /mnt/c/Users/*.xml

And then making changes using find and sed such as:

find /mnt/c/Users -name \*.xml -exec sed -i "s,http://storageio.com,https://storageio.com,g" {} \;
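If you want to rehearse that grep/find/sed pattern safely before touching real files, a scratch directory works well. A minimal sketch (the file name and the http-to-https change are illustrative):

```shell
# Practice the find + sed in-place edit on a throwaway file first.
tmpdir=$(mktemp -d)
printf '<link>http://storageio.com</link>\n' > "$tmpdir/page.xml"

# 1) Locate candidate files
grep -rl "http://storageio.com" "$tmpdir"

# 2) Edit in place, same form as the command above
find "$tmpdir" -name '*.xml' -exec sed -i 's,http://storageio.com,https://storageio.com,g' {} \;

cat "$tmpdir/page.xml"   # <link>https://storageio.com</link>
rm -rf "$tmpdir"
```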

Note that not all Linux apps and tools can use files via /mnt, in which case a solution is to create a symbolic link.

For example:

ln -s "/mnt/c/Users/Test1/Documents"  /home/Test1/Projects

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

If you primarily work on (or have a preference for) Linux systems and need to do anything from development to administration or other activity on a Windows system, Windows Subsystem for Linux (WSL) provides a bash shell for familiar tasks. Likewise, if you are primarily a Windows person and need to brush up on your Linux skills, WSL can help. If you need to run Linux server applications or workloads, put those into a Docker container, Hyper-V instance or Azure VM instead.

Overall I like WSL for what it is: a tool that eliminates the need to install several other tools to do common tasks, plus makes it easier to work across various Linux and Windows systems including bare metal, virtual and cloud-based. Now that you have been introduced to Windows Subsystem for Linux (WSL), including its install and use, add it to your data infrastructure toolbox.

By the way, if you have not heard, it's #Blogtober; check out some of the other blogs and posts occurring during October here.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server StorageIO May 2016 Update Newsletter

Volume 16, Issue V

Hello and welcome to this May 2016 Server StorageIO update newsletter.

In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Industry Activity Trends
  • Resources and Links

    Enjoy this shortened edition of the Server StorageIO update newsletter; watch for more tips, articles, lab report test drive reviews, blog posts, videos and podcasts, as well as in-the-news commentary appearing soon.

    Cheers GS

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

     

    StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Cloud and Virtual Data Storage Networking: Various comments and discussions

    StorageIOblog: Additional comments and perspectives

    SearchCloudStorage: Comments on OpenIO joins object storage cloud scrum

    SearchCloudStorage: Comments on EMC VxRack Neutrino Nodes and OpenStack

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    Via Micron Blog (Guest Post): What’s next for NVMe and your Data Center – Preparing for Tomorrow Today

    Check out these resources techniques, trends as well as tools. View more tips and articles here

    StorageIO Webinars and Industry Events

    Brouwer Storage (Nijkerk Holland) June 10-15, 2016 – Various in person seminar workshops

    June 15: Software Defined Data center with Greg Schulz and Fujitsu International

    June 14: Round table with Greg Schulz and John Williams (General manager of Reduxio) and Gert Brouwer. Discussion about new technologies with Reduxio as an example.

    June 10: Hyper-converged, converged and related subjects presented by Greg Schulz

    Simplify and Streamline Your Virtual Infrastructure – May 17 webinar

    Is Hyper-Converged Infrastructure Right for Your Business? May 11 webinar

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    Making the Cloud Work for You: Rapid Recovery April 27, 2016 webinar

    See more webinars and other activities on the Server StorageIO Events page here.

     

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs


    Ubuntu 16.04 LTS (aka Xenial Xerus) What’s In The Bits and Bytes?


    server storage I/O trends

    Ubuntu 16.04 LTS (aka Xenial Xerus) was recently released (you can get the bits or software download here). Ubuntu is available in various distributions including server, workstation and desktop among others, and can run bare metal on a physical machine (PM), in a virtual machine (VM), or as a cloud instance via services such as Amazon Web Services (AWS) as well as Microsoft Azure.

    Refresh, What is Ubuntu

    For those not familiar or who need a refresher, Ubuntu is an open source Linux distribution with the company behind it called Canonical. Ubuntu is a Debian-based Linux distribution with the Unity user interface. Ubuntu is available across different platform architectures, from industry standard Intel and AMD x86 32-bit and 64-bit to ARM processors and even the venerable IBM zSeries (aka zed) mainframe as part of LinuxONE.

    As a desktop, some see or use Ubuntu as an open source alternative to desktop interfaces based on those from Microsoft such as Windows or Apple.

    As a server, Ubuntu can be deployed for everything from traditional applications to cloud and converged environments, including as a Docker container, Ceph or OpenStack deployment platform. Speaking of Microsoft and Windows, if you are a *nix bash type of person yet need (or have) to work with Windows, bash (and more) is coming to Windows 10. Ubuntu desktop GUI options include Unity along with tools such as Compiz and LibreOffice (an alternative to Microsoft Office).

    What’s New In the Bits and Bytes (e.g. Software)

    Ubuntu 16.04 LTS is based on the Linux 4.4 kernel and also includes Python 3, Ceph Jewel (block, file and object storage) and OpenStack Mitaka among other enhancements. These fixes and enhancements include:

    • Libvirt 1.3.1
    • QEMU 2.5
    • Open vSwitch 2.5.0
    • LXD 2.0
    • Docker 1.10
    • PHP 7.0
    • MySQL 5.7
    • Juju 2.0
    • Golang 1.6 toolchain
    • OpenSSH 7.2p2, which disables legacy ciphers by default (including the 1024-bit diffie-hellman-group1-sha1 key exchange, ssh-dss and ssh-dss-cert)
    • GNU toolchain
    • Apt 1.2

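    To see what a given system actually has, a few quick checks work. This is a sketch; php7.0, mysql-server and docker.io are, as best I recall, the Ubuntu 16.04 archive package names:

```shell
# Quick checks of distribution, kernel and packaged versions.
if command -v lsb_release >/dev/null; then
  lsb_release -d        # e.g. "Ubuntu 16.04 LTS"
fi
uname -r                # kernel, e.g. 4.4.x
if command -v apt-cache >/dev/null; then
  apt-cache policy php7.0 mysql-server docker.io | head -15
fi
```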
    What About Ubuntu for IBM zSeries Mainframe

    Ubuntu runs on the 64-bit zSeries architecture with about 95% binary compatibility. If you look at the release notes, there are still a few known issues being worked out. However (read the release notes), since Ubuntu 16.04 LTS has OpenStack and Ceph, those capabilities could be deployed on a zSeries.

    Now some of you might think wait, how can Linux and Ceph among others work on a FICON based mainframe?

    No worries; keep in mind that FICON, the IBM zSeries server storage I/O protocol that co-exists on Fibre Channel along with SCSI_FCP (e.g. FCP), aka what most Open Systems people simply refer to as Fibre Channel (FC), works with z/OS and other operating systems. In the case of native Linux on zSeries, those systems can in fact use SCSI mode for accessing shared storage. In addition to the IBM LinuxONE site, you can learn more about Ubuntu running native on zSeries here on the Ubuntu site.

    Where To Learn More

    What This All Means

    Ubuntu as a Linux distribution continues to evolve and increase in deployment across different environments. Some still view Ubuntu as the low-end Linux for home, hobbyists or those looking for an alternative desktop to Microsoft Windows. However, Ubuntu is also increasingly being used in roles where other Linux distributions such as Red Hat Enterprise Linux (RHEL), SUSE and CentOS have gained prior popularity.

    In some ways you can view RHEL as the first-generation Linux distribution that gained popularity in the enterprise with early adopters, followed by a second wave or generation of those who favored CentOS among others, such as the cloud crowd. Then there is the Ubuntu wave, which is expanding in many areas along with others such as CoreOS. Granted, with some people the preference for one Linux distribution vs. another can be as polarizing as Linux vs. Windows, or Open Systems vs. Mainframe vs. Cloud.

    Having various Ubuntu distributions installed across different servers (in addition to CentOS, SUSE and others), I found the install and new capabilities of Ubuntu 16.04 LTS interesting, and I continue to explore the many new features while upgrading some of my older systems.

    Get the Ubuntu 16.04 LTS bits here to give it a try or upgrade your existing systems.

    Ok, nuff said

    Cheers
    Gs


    HDDs evolve for Content Application servers


    hdds evolve server storage I/O trends

    Updated 1/23/2018

    Enterprise HDDs evolve for content server platform

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the seventh and final post in this multi-part series (read part six here), based on a hands-on lab report white paper I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). This post compares how HDDs continue to evolve across generations, boosting performance as well as capacity and reliability. It also looks at how there is more to HDD performance than the traditional focus on Revolutions Per Minute (RPM) as a speed indicator.

    Comparing Different Enterprise 10K And 15K HDD Generations

    There is more to HDD performance than the RPM speed of the device. RPM plays an important role; however, other things also impact HDD performance. A common myth is that HDDs have not improved in performance over the past several years with each successive generation. Table-10 shows a sampling of various generations of enterprise 10K and 15K HDDs (note 14), including different form factors, and how their performance continues to improve.

    different 10K and 15K HDDs
    Figure-9 10K and 15K HDD performance improvements

    Figure-9 shows how performance continues to improve with 10K and 15K HDDs with each new generation, including those with enhanced cache features. With improvements to the cache software within the drives, along with enhanced persistent non-volatile memory (NVM) and incremental mechanical improvements, both read and write performance continue to be enhanced.

    Figure-9 puts into perspective the continued performance enhancements of HDDs, comparing various enterprise 10K and 15K devices. The workload is the same TPC-C test used earlier in a similar setup (note 14), with no RAID. Figure-9 shows 100 simulated users accessing a database on each of the different drives, all running concurrently. The older 15K 3.5” Cheetah and 2.5” Savio drives used had a capacity of 146GB, which used a database scale factor of 1500, or 134GB. All other drives used a scale factor of 3000, or 276GB. Figure-9 also highlights the improvements in both TPS performance and lower response time with newer HDDs, including those with the performance-enhanced cache feature.

    The workloads run are the same as the TPC-C ones shown earlier; however, these drives were not configured with any RAID. The TPC-C activity used Benchmark Factory with a setup and configuration similar to those used earlier, including a multi-socket, multi-core Windows 2012 R2 server running Microsoft SQL Server 2012 with a separate database for each drive type.

    ENT 10K V3 2.5"
    Users                  1      20      50     100
    TPS (TPC-C)         14.8    50.9    30.3    39.9
    Resp. Time (Sec.)    0.0     0.4     1.6     1.7

    ENT (Cheetah) 15K 3.5"
    Users                  1      20      50     100
    TPS (TPC-C)         14.6    51.3    27.1    39.3
    Resp. Time (Sec.)    0.0     0.3     1.8     2.1

    ENT 10K 2.5" (with cache)
    Users                  1      20      50     100
    TPS (TPC-C)         19.2   146.3    72.6    71.0
    Resp. Time (Sec.)    0.0     0.1     0.7     0.0

    ENT (Savio) 15K 2.5"
    Users                  1      20      50     100
    TPS (TPC-C)         15.8    59.1    40.2    53.6
    Resp. Time (Sec.)    0.0     0.3     1.2     1.2

    ENT 15K V4 2.5"
    Users                  1      20      50     100
    TPS (TPC-C)         19.7   119.8    75.3    69.2
    Resp. Time (Sec.)    0.0     0.1     0.6     1.0

    ENT 15K (enhanced cache) 2.5"
    Users                  1      20      50     100
    TPS (TPC-C)         20.1   184.1   113.7   122.1
    Resp. Time (Sec.)    0.0     0.1     0.4     0.2

    Table-10 Continued Enterprise 10K and 15K HDD performance improvements

    (Note 14) 10K and 15K generational comparisons were run on a separate but comparable server to the one used for the other test workloads. Workload configuration settings were the same as the other database workloads, including using Microsoft SQL Server 2012 on a Windows 2012 R2 system with Benchmark Factory driving the workload. Database memory size, however, was reduced to only 8GB vs. the 16GB used in other tests.
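    To put the 100-user numbers in perspective: per Table-10, the 15K with enhanced cache delivered 122.1 TPS vs. 39.3 TPS for the older 3.5" Cheetah 15K. The arithmetic as a one-liner:

```shell
# Ratio of 100-user TPS figures from Table-10:
# ENT 15K (enhanced cache) at 122.1 vs ENT (Cheetah) 15K 3.5" at 39.3
awk 'BEGIN { printf "%.1fx\n", 122.1 / 39.3 }'   # prints 3.1x
```

    Roughly a 3.1x TPS gain at the same nominal RPM, which is exactly the point of looking beyond RPM alone.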

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    A little bit of flash in the right place with applicable algorithms goes a long way, an example being the Seagate Enterprise HDDs with the enhanced cache feature. Likewise, HDDs are very much alive, complementing SSDs and vice versa. For high-performance content application workloads, flash SSD solutions including NVMe, 12Gbps SAS and 6Gbps SATA devices are cost-effective. HDDs continue to be cost-effective data storage devices for capacity, as well as for environments that do not need the performance of flash SSDs.

    For some environments, using a combination of flash and HDDs complementing each other, along with cache software, can be a cost-effective solution. The previous workload examples provide insight for making cost-effective, informed storage decisions.

    Evaluate today's HDDs on their effective performance running workloads as similar as possible to your own, or, better yet, actually try them out with your applications. Today there is more to HDD performance than just RPM speed, particularly with the Seagate Enterprise Performance 10K and 15K HDDs with the enhanced caching feature.

    The Enterprise Performance 10K with enhanced cache feature provides a good balance of capacity and performance while being cost-effective. If you are using older 3.5” 15K or even previous-generation 2.5” 15K RPM and non-performance-enhanced HDDs, take a look at how the newer generation HDDs perform, looking beyond the RPM of the device.

    Fast content applications need fast and flexible content solution platforms such as those from Servers Direct with HDDs from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost-effective.

    Ok, nuff said, for now.

    Gs


    Server virtualization nested and tiered hypervisors

    Storage I/O trends

    Server virtualization nested and tiered hypervisors

    A few years ago I did a piece (click here) about the then-emerging trend of tiered hypervisors, in particular using different products or technologies in the same environment.

    Tiered snow tools
    Tiered snow management tools and technologies

    Tiered hypervisors can be as simple as using different technologies such as VMware vSphere/ESXi, Microsoft Hyper-V, KVM or Xen in your environment on different physical machines (PMs) for various business and application purposes. This is similar to having different types or tiers of technology including servers, storage, networks or data protection to meet various needs.

    Another aspect is nesting hypervisors on top of each other for testing, development and other purposes.

    nested hypervisor

    I use nested VMware ESXi for testing various configurations, verifying new software when needed, or creating a larger virtual environment for functionality simulations. If you are new to nesting, which is running a hypervisor on top of another hypervisor such as ESXi on ESXi or Hyper-V on ESXi, here are a few links to get you up to speed. One is a VMware knowledge base piece, two are from William Lam (@lamw) of Virtual Ghetto (getting started here and VSAN here), and the other is from Duncan Epping (@DuncanYB) of Yellow Bricks.
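    For reference, the commonly documented way to enable nesting on ESXi 5.1 and later (covered in the links above; treat the exact knob as version-dependent, as newer vSphere clients expose it as "Expose hardware assisted virtualization to the guest OS") is a one-line addition to the guest VM's .vmx file:

```
vhv.enable = "TRUE"
```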

    Recently I did a piece over at FedTech titled 3 Tips for Maximizing Tiered Hypervisors that looks at using multiple virtualization tools for different applications and how they can give a number of benefits.

    Here is an excerpt:

    Tiered hypervisors can be run in different configurations. For example, an agency can run multiple server hypervisors on the same physical blade or server or on separate servers. Having different tiers or types of hypervisors for server and desktop virtualization is similar to using multiple kinds of servers or storage hardware to meet different needs. Lower-cost hypervisors may have lacked some functionality in the past, but developers often add powerful new capabilities, making them an excellent option.

    IT administrators who are considering the use of tiered or multiple hypervisors should know the answers to these questions:

    • How will the different hypervisors be managed?
    • Will the environment need new management tools for backup, monitoring, configuration, provisioning or other routine functions?
    • Do existing tools offer support for different hypervisors?
    • Will the hypervisors have dedicated PMs or be nested?
    • How will IT migrate virtual machines and their guests between different hypervisors? For example if using VMware and Hyper-V, will you use VMware vCenter Multi-Hypervisor Manager or something similar?

    So how about it, how are you using and managing tiered hypervisors?

    Ok, nuff said for now.

    Cheers
    Gs


    HP Moonshot 1500 software defined capable compute servers

    Storage I/O cloud virtual and big data perspectives

    Riding the current software defined data center (SDDC) wave being led by the likes of VMware, with software defined networking (SDN) also championed by VMware via its acquisition of Nicira last year, software defined marketing (SDM) is in full force. HP, being a player in providing the core building blocks for traditional little data and big data, along with physical, virtual, converged, cloud and software defined environments, has announced a new compute, processor or server platform called the Moonshot 1500.

    HP Moonshot software defined server image

    Software defined marketing aside, there are some real and interesting things from a technology standpoint that HP is doing with the Moonshot 1500 along with other vendors who are offering micro server based solutions.

    First, for those who see server (processor and compute) improvements as being more and faster cores (and threads) per socket, along with extra memory, not to mention 10GbE or 40GbE networking and PCIe expansion or IO connectivity, hang on to your hats.

    HP Moonshot software defined server image individual server blade

    Moonshot follows the model of the micro servers or micro blades that HP has offered in the past, along with the likes of Dell and SeaMicro (now part of AMD). Micro servers are almost the opposite of regular servers or blades, where the focus is putting more capability onto a motherboard or blade.

    With micro servers, the approach is to support those applications and environments that do not need lots of CPU processing capability or large amounts of storage, IO or memory. These include some web hosting or cloud application environments that can leverage many smaller, lower-power, lower-performance, less resource-intensive platforms. For example, big data (or little data) applications whose software or tools benefit from many low-cost, low-power nodes with distributed, clustered, grid, RAIN or ring-based architectures can benefit from this type of solution.

    HP Moonshot software defined server image and components

    What is the Moonshot 1500 system?

    • 4.3U high rack-mount chassis that holds up to 45 micro servers
    • Each hot-swap micro server is its own self-contained module, similar to a blade server
    • Server modules install vertically from the top into the chassis, similar to some high-density storage enclosures
    • Compute is provided by Intel Atom S1260 2.0GHz processors with 1MB of cache memory
    • Single SO-DIMM slot (unbuffered ECC at 1333 MHz) supporting 8GB (1 x 8GB DIMM) of DRAM
    • Each server module has a single 2.5″ SATA drive onboard: 200GB SSD, or 500GB or 1TB HDD
    • A dual-port Broadcom 5720 1Gb Ethernet LAN per server module that connects to chassis switches
    • Marvell 9125 storage controller integrated onboard each server module
    • Chassis and enclosure management along with ACPI 2.0b, SMBIOS 2.6.1 and PXE support
    • A pair of Ethernet switches, each giving up to six 10GbE uplinks for the Moonshot chassis
    • Dual RJ-45 connectors for iLO chassis management are also included
    • Status LEDs on the front of each chassis provide status of the servers and network switches
    • Support for Canonical Ubuntu 12.04, RHEL 6.4 and SUSE Linux Enterprise Server (SLES) 11 SP2
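    Some back-of-envelope chassis math from the list above (a sketch, assuming a fully loaded chassis of 45 cartridges at 8GB DRAM and up to 1TB HDD each):

```shell
# Totals for a fully loaded Moonshot 1500 chassis (45 cartridges,
# 8GB DRAM and up to 1TB local HDD per cartridge, per the spec list).
nodes=45; dram_gb=8; disk_tb=1
echo "DRAM: $((nodes * dram_gb)) GB, max local disk: $((nodes * disk_tb)) TB"
# prints: DRAM: 360 GB, max local disk: 45 TB
```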


    Notice a common theme with Moonshot along with other micro server-based systems and architectures?

    If not, it is simple; I mean literally, simple and flexible is the value proposition.

    Simple is the theme (with software defined for the marketing), along with low cost, lower energy and power demand, lower performance, and less of what is not needed, to remove cost.

    Granted, not all applications will be a good fit for micro servers (excuse me, software defined servers), as some will need the more robust resources of traditional servers. With solutions such as HP Moonshot, system architects and designers have more options available to them as to what resources or solution options to use. For example, a cloud or object storage solution that does not need a lot of processing performance, memory or storage per node might find this an interesting option for mid to entry-level needs.

    Will HP release a version of their Lefthand or IBRIX (both since renamed) based storage management software on these systems for some market or application needs?

    How about deploying NoSQL tools including Cassandra or MongoDB, or CloudStack, OpenStack Swift, Basho Riak (or Riak CS) or other software including object storage on these types of solutions, or web servers and other applications that do not need the fastest processors or the most memory per node?

    Thus micro server-based solutions such as Moonshot enable return on innovation (the new ROI) by enabling customers to leverage the right tool (e.g. hard product) to create their soft product allowing their users or customers to in turn innovate in a cost-effective way.

    Will the Moonshot servers be the software defined turnaround for HP? Click here to see what Bloomberg has to say, or Forbes here.

    Learn more about Moonshot servers at HP here, here or data sheets found here.

    Btw, HP claims that this is the industry's first software defined server, hmm.

    Ok, nuff said (for now).

    Cheers gs
