Cloud Conversations: Revisiting re:Invent 2014 and other AWS updates

November 30, 2014 – 11:38 am


This is part one of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part two here.

Revisiting re:Invent 2014 and other AWS updates

AWS re:Invent 2014

A few weeks ago I attended Amazon Web Services (AWS) re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have an interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related themes, you should check out AWS re:Invent.

AWS made several announcements at re:Invent, including many around development tools, compute and data storage services. One of those to keep an eye on is the cloud-based Aurora relational database service that complements the existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).

Some recent AWS announcements prior to re:Invent include:

AWS vCenter Portal

The AWS Management Portal for vCenter adds a plug-in within your VMware vCenter to manage your AWS infrastructure. The plug-in includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and for creating VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about the AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.

AWS re:invent content

AWS Andy Jassy (Image via AWS)

November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements, including the Aurora relational database that complements the existing RDS (Relational Database Service) offerings. In addition to Andy, the keynote sessions also included various special guests ranging from AWS customers and partners to internal people in support of the various initiatives and announcements.

AWS CTO Werner Vogels (Image via AWS)

November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, CTO Werner Vogels makes announcements about the new Container and Lambda services.

AWS re:Invent announcements

Announcements and enhancements made by AWS during re:Invent include:

  • Key Management Service (KMS)
  • Amazon RDS for Aurora
  • Amazon EC2 Container Service
  • AWS Lambda
  • Amazon EBS Enhancements
  • Application development, deployment and life-cycle management tools
  • AWS Service Catalog
  • AWS CodeDeploy
  • AWS CodeCommit
  • AWS CodePipeline

Key Management Service (KMS)

A hardware security module (HSM) based key management service for creating and controlling the encryption keys used to protect your digital assets. KMS integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs for regulatory, compliance and management needs. Learn more about AWS KMS here.
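To make the envelope-encryption pattern behind a service like KMS concrete, here is a minimal Python sketch. In real use you would call the KMS GenerateDataKey API (e.g. via boto) to obtain a plaintext data key plus a KMS-encrypted copy of it; the toy XOR keystream below is an illustration only, not a real cipher, and a production application would use KMS together with AES.

```python
import os, hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a repeatable keystream from the data key (illustration only,
    # not real cryptography).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, key: bytes) -> bytes:
    # XOR the data against the derived keystream; applying it twice
    # with the same key round-trips back to the original data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Envelope encryption flow:
# 1. Ask KMS for a data key (here we simply generate one locally).
data_key = os.urandom(32)
# 2. Encrypt the object locally with the data key.
secret = b"backup contents"
ciphertext = xor(secret, data_key)
# 3. Store the ciphertext plus the KMS-encrypted copy of the data key,
#    then discard the plaintext key. Decryption reverses the steps.
assert xor(ciphertext, data_key) == secret
```

The point of the pattern is that the master key never leaves the HSM; only short-lived data keys do, which is what lets KMS integrate with EBS, S3 and Redshift without moving bulk data through the key service.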

AWS Database

For those who are not familiar, AWS has a suite of database-related services spanning SQL and NoSQL, from simple to transactional to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database types, instances and services. RDS instances and types include MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below). Other little data database and big data repository related offerings include SimpleDB and DynamoDB (non-SQL databases), ElastiCache (an in-memory cache repository) and Redshift (a large-scale data warehouse and big data repository).
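As a simple illustration of how the RDS service is driven programmatically, here is a hedged sketch of the parameters for the RDS CreateDBInstance API call. The identifier and credentials are hypothetical placeholders, and the actual client call is left commented out so the sketch runs without AWS credentials.

```python
# Sketch of launching a managed MySQL instance through the RDS API.
# Parameter names follow the RDS CreateDBInstance call.
create_params = {
    "DBInstanceIdentifier": "example-db",   # hypothetical instance name
    "Engine": "mysql",                      # other engines: postgres, oracle, sqlserver
    "DBInstanceClass": "db.m3.medium",
    "AllocatedStorage": 20,                 # in GB
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me",      # placeholder only
}
# import boto3
# rds = boto3.client("rds")
# rds.create_db_instance(**create_params)
```

Swapping the `Engine` value is essentially all that distinguishes provisioning one RDS engine from another, which is part of the appeal of the managed service model.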

In addition to the database services offered by AWS, you can also combine various AWS resources including EC2 compute, EBS and other storage offerings to create your own solution. For example, there are various Amazon Machine Images (AMIs), or pre-built operating system and database tool images, available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak along with other NoSQL or alternative databases and key-value repositories, check out Seven Databases in Seven Weeks in my book review of it here.

Seven Databases book review
Seven Databases in Seven Weeks and the NoSQL movement

Amazon RDS for Aurora

Aurora is a new relational database offering that is part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS claims 5x better performance than standard MySQL with Aurora while remaining resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.
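Since Aurora is wire-compatible with MySQL, switching an application over should largely be a matter of changing the endpoint. A small sketch of that idea follows; the hostnames are hypothetical placeholders.

```python
# Existing MySQL client configuration (hypothetical host).
mysql_cfg = {
    "host": "mysql-host.example.com",
    "port": 3306,                 # standard MySQL port
    "user": "app",
    "database": "orders",
}

# Pointing the same configuration at an Aurora cluster endpoint
# (hypothetical) is the only change; user, port and schema stay the same.
aurora_cfg = dict(mysql_cfg, host="example-cluster.us-east-1.rds.amazonaws.com")

# A driver call such as mysql.connector.connect(**aurora_cfg) would then
# work as it did against stock MySQL.
```

That compatibility is what lets Aurora position itself against existing transactional databases without requiring application rewrites.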

Amazon EC2 C4 instances

AWS will be adding the new C4 instance as a next generation of EC2 compute instance based on Intel Xeon E5-2666 v3 (Haswell) processors. These processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 compute performance to date. AWS is targeting traditional High Performance Computing (HPC) along with other compute-intensive workloads including analytics, gaming, and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.
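For those planning to try the new instance types once available, here is a sketch of an EC2 RunInstances request targeting a C4 type. The parameter names follow the EC2 API; the AMI id is a hypothetical placeholder and the client call is commented out so nothing is actually launched.

```python
# Sketch of requesting a C4 compute-optimized instance via RunInstances.
run_params = {
    "ImageId": "ami-12345678",     # hypothetical AMI id
    "InstanceType": "c4.8xlarge",  # largest of the announced C4 sizes
    "MinCount": 1,
    "MaxCount": 1,
}
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(**run_params)
```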

Amazon EC2 Container Service

Containers such as those from Docker have become popular for helping developers rapidly build and deploy scalable applications. AWS has added a new feature called the EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high-performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity and Access Management (IAM) roles, along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about the EC2 Container Service here and more about Docker here.
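To give a feel for the simple APIs involved, here is a sketch of a minimal ECS task definition describing one Docker container. Field names follow the ECS RegisterTaskDefinition call, while the family name and image are just examples; the client call is commented out.

```python
# Minimal ECS task definition sketch: one Docker container with CPU and
# memory reservations plus a port mapping.
task_def = {
    "family": "web-demo",                     # hypothetical task family
    "containerDefinitions": [{
        "name": "web",
        "image": "nginx:latest",              # any Docker image works here
        "cpu": 256,                           # CPU units to reserve
        "memory": 128,                        # MB of memory to reserve
        "essential": True,                    # task stops if this container stops
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
}
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**task_def)
```

Registering the task definition and then running it on an ECS cluster is what replaces hand-scheduling containers across your EC2 instances.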


Continue reading about re:Invent 2014 and other recent AWS enhancements here in part two of this two-part series.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2018 Server StorageIO and UnlimitedIO LLC All Rights Reserved

2 Responses to “Cloud Conversations: Revisiting re:Invent 2014 and other AWS updates”

Hi Greg,

    Thanks for sharing these insights.

I’d love to hear your views on “Amazon EC2 C4 instances” performance specs as shared by AWS at re:Invent.

    Isn’t it time that AWS stopped qualifying the new AWS instance types using old-brick-n-mortar-computing days of physical servers RAM/CPU Processor type/ IOPS Storage? Especially for AWS Lambda, AWS Kinesis, AWS Redshift kind of AWS users who pay in terms of service requests processed per unit time? As compute resources evolve to cloud and IaaS, app developers are offered services and not just compute resources. The pricing too has evolved from one time payment to pay-as-you-go. Then why shouldn’t the performance be quantified by how well a service paid for by users runs on one type of instance vs the other rather than – backed by Haswell and SSD IOPS and old-world RAM numbers? Say – would AWS share data such as for an AWS client or for a particular benchmark, xyz service runs K request/s on instance type X vs 5K requests on say C4 instances. Thoughts?


    By Shaloo on Dec 2, 2014

Hello Shaloo and thanks for your comments/perspectives.

    I too look forward to trying the new C4 instances hopefully sooner vs. later depending on when I can get some time to run some things on them.

As for whether it’s time for AWS to stop qualifying new instances using traditional known metrics, measurements and tools, IMHO that time won’t be until all of the existing legacy applications and workloads get moved to AWS or some other environment. The reason is simple: those who are still designing or working with metrics/measurements in existing/legacy environments need a corresponding comparison gauge that is as close to apples-to-apples as possible.

    However having said that, AWS along with other vendors also need to show how their solutions/resources/services perform under differing workloads using metrics that matter with applicable context.

As for things such as Lambda, the developer should know how long their code takes to execute on a given class of server as well as how much memory it consumes. The reason IMHO that the developer should know this is to make sure the code is properly running as well as to establish a baseline for future comparisons, including whether it is running normally or not. With today’s tools it should be easy, particularly from service providers, for a developer to get a report saying your code ran in this amount of time, used these resources, touched or called these routines and libraries, and perhaps even receive some tips to optimize…

As for the best benchmark or comparison, yes, it would be great to see AWS and others continue to evolve toward workload/application-based comparisons with metrics that matter along with applicable context vs. simple speeds and feeds. However, in the near term, knowing the speeds and feeds helps to make some approximate comparisons. Thus people should run proofs of concept with their applications in AWS as an example to see how they perform, do some tuning and learning, and figure out what they need to do as part of a plan to migrate or start using cloud services, similar to what they would (or should) do in a physical environment.

It would be interesting to hear your thoughts on what AWS and others should do as an alternative to how they describe and quantify their resource/service capabilities.

    Hope all is well

    Cheers gs

    By Greg Schulz on Dec 2, 2014
