Part II Revisiting AWS S3 Storage Gateway (Test Drive Deployment)

This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it’s called revisited). As part of a two-part series, the first post looks at what AWS Storage Gateway is and how it has improved since my last review, along with deployment options. This second post looks at a sample test drive deployment and use.

What About Storage Gateway Costs?

Costs vary by region, the type of storage being used (files stored in S3, volume storage, EBS snapshots, virtual tape storage, virtual tape archive storage), and the type of gateway host, along with how the gateway is accessed and used. Request pricing also varies, including fees for data written to AWS storage by the gateway (up to a maximum of $125.00 per month), snapshot/volume deletes, virtual tape deletes (a prorated fee for deletes within 90 days of being archived), virtual tape archival and virtual tape retrieval. Note that there are also various data transfer fees that vary by region and gateway host. Learn more about pricing here.
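To get a feel for how the capped write fee interacts with ordinary storage charges, here is a minimal sketch. The per-GB rates are made-up placeholders (rates vary by region and change over time; check the AWS pricing page); only the $125.00 monthly cap on the gateway data-written fee comes from the text above.

```python
# Illustrative cost sketch.  The per-GB rates below are assumed placeholder
# values, NOT current AWS pricing; only the $125.00 monthly cap on the
# gateway data-written fee is from the pricing description above.
S3_STANDARD_PER_GB = 0.023    # assumed USD per GB-month of stored data
GATEWAY_WRITE_PER_GB = 0.01   # assumed USD per GB written through the gateway
GATEWAY_WRITE_CAP = 125.00    # per-gateway monthly cap on the write fee

def estimate_monthly_cost(stored_gb, written_gb):
    """Return (storage_fee, write_fee) in USD for one month."""
    storage_fee = stored_gb * S3_STANDARD_PER_GB
    # The data-written request fee is capped per gateway per month.
    write_fee = min(written_gb * GATEWAY_WRITE_PER_GB, GATEWAY_WRITE_CAP)
    return storage_fee, write_fee

# Heavy ingest month: the write fee hits the cap while storage accrues normally.
storage_fee, write_fee = estimate_monthly_cost(stored_gb=5000, written_gb=20000)
```

The point of the sketch is simply that space capacity is only one line item; a month of heavy writes can cost more in request fees than in storage until the cap kicks in.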

What Are Some Storage Gateway Alternatives?

AWS and S3 storage gateway access alternatives include offerings from various third parties (including some in the AWS Marketplace), as well as data protection tools (e.g. backup/restore, archive, snapshot, replication) and, more commonly, storage systems. Some tools include CloudBerry, S3FS, S3motion and S3 Browser, among many others.

Tip: when a vendor says they support S3, ask whether that means their back-end (e.g. they can access and store data in S3) or their front-end (e.g. they can be accessed by applications that speak the S3 API). Also explore what format the application, tool or storage system uses to store data in AWS storage; for example, are files mapped one to one to S3 objects along with their corresponding directory hierarchy, or are they stored in a save set or other container entity?
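One way to check that one-to-one mapping question is to predict what object key a given file should produce. The helper below is hypothetical, sketching the straightforward file-to-key layout (path separators normalized to the forward slashes S3 keys use):

```python
# Hypothetical helper predicting the S3 object key for a file on a share,
# assuming a one-to-one file-to-object layout.
def to_s3_key(local_path, share_root):
    rel = local_path[len(share_root):].lstrip("/\\")
    return rel.replace("\\", "/")   # S3 keys always use forward slashes

# A file on a Windows-mounted share and a Linux-mounted share map the same way.
win_key = to_s3_key(r"Z:\projects\report.docx", "Z:")
lin_key = to_s3_key("/mnt/awsgwydemo/projects/report.docx", "/mnt/awsgwydemo")
```

If the tool you are evaluating stores data in a proprietary save set instead, the keys you see in the bucket will not line up with a mapping like this, which is exactly the difference worth asking about.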

AWS Storage Gateway Deployment and Management Tips

Once you have created your AWS account (if you did not already have one) and logged into the AWS console (note the link defaults to the US East 1 region), go to the AWS Services Dashboard and select Storage Gateway (or click here, which goes to US East 1). You will be presented with three gateway modes (File, Volume or VTL).

What Does a Storage Gateway Install Look Like?

The following shows what installing an AWS Storage Gateway for file (and then volume) looks like. First, access the AWS Storage Gateway main landing page (it might change by the time you read this) to get started. Scroll down and click on the Get Started with AWS Storage Gateway button, or click here.

AWS Storage Gateway Landing Page

Select the type of gateway to create; in the following example, File is chosen.

Select type of AWS storage gateway

Next select the type of file gateway host (EC2 cloud hosted, or on-premises VMware). If you choose VMware, an OVA will be downloaded (follow the onscreen instructions) that you deploy on your ESXi system or with vCenter. Note that there is a different VMware VM gateway OVA for the File Gateway and another for the Volume Gateway. In the following example the VMware ESXi OVA is selected and downloaded, then accessed via VMware tools such as the vSphere Web Client for deployment.

AWS Storage Gateway select download

Once your VMware OVA file is downloaded from AWS, install it using your preferred VMware tool; in this case I used the vSphere Web Client.

AWS Storage Gateway VM deploy

Once you have deployed the VMware VM for File Storage Gateway, it is time to connect to the gateway using the IP address assigned (static or DHCP) for the VM. Note that you may need to allocate some extra VMware storage to the VM if prompted (this mainly applies to Volume Gateway). Also follow directions about setting NTP time, using paravirtual adapters, thick vs. thin provisioning along with IP settings. Also double-check to make sure your VM and host are set for high-performance power setting. Note that the default username is sguser and password is sgpassword for the gateway.

AWS Storage Gateway Connect

Once you successfully connect to the gateway, the next step is to configure file share settings.

AWS Storage Gateway Configure File Share

Configure the file share by selecting which gateway to use (in case you have more than one), the name of an S3 bucket to create, the type of storage (S3 Standard or IA), along with access management security controls.
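The same share settings can also be scripted via the AWS SDK. The sketch below builds the parameters for boto3's storagegateway `create_nfs_file_share` call; the ARNs are placeholders and the actual API call is left commented out, so treat this as an outline under stated assumptions rather than a tested deployment script.

```python
# Sketch: building the request for boto3's storagegateway
# create_nfs_file_share call.  All ARNs below are placeholders -- substitute
# your own gateway, IAM role and bucket.
import uuid

def nfs_share_params(gateway_arn, role_arn, bucket_arn,
                     storage_class="S3_STANDARD"):
    return {
        "ClientToken": str(uuid.uuid4()),      # idempotency token
        "GatewayARN": gateway_arn,
        "Role": role_arn,                      # IAM role the gateway assumes for S3 access
        "LocationARN": bucket_arn,             # the S3 bucket backing the share
        "DefaultStorageClass": storage_class,  # S3_STANDARD or S3_STANDARD_IA
    }

params = nfs_share_params(
    "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    "arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    "arn:aws:s3:::awsgwydemo",
)
# import boto3
# boto3.client("storagegateway").create_nfs_file_share(**params)
```

The storage class choice here mirrors the S3 Standard vs. IA selection made in the console above.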

AWS Storage Gateway Create Share

The next step is to complete file share creation; note the commands provided for Linux and Windows for accessing the file share.
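The Linux and Windows access commands the console displays generally take the shape built by these hypothetical helpers (the IP address and share name here are examples only; use the exact strings your console shows):

```python
# Hypothetical helpers that build NFS mount commands in the same general
# shape as the ones the Storage Gateway console displays.  The IP address
# and share name are examples, not real values.

def linux_mount_cmd(gateway_ip, share, mount_point):
    # Linux NFS client mount of the gateway file share
    return f"sudo mount -t nfs -o nolock {gateway_ip}:/{share} {mount_point}"

def windows_mount_cmd(gateway_ip, share, drive_letter):
    # Windows built-in NFS client equivalent, mapping to a drive letter
    return f"mount -o nolock \\\\{gateway_ip}\\{share} {drive_letter}:"

lin = linux_mount_cmd("10.0.0.5", "awsgwydemo", "/mnt/awsgwydemo")
win = windows_mount_cmd("10.0.0.5", "awsgwydemo", "Z")
```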

AWS Storage Gateway Review Share Settings

Review file share settings

AWS Storage Gateway access from Windows

Now let's use the file share by accessing and mounting it on a Windows system, then copying some files to it.

AWS Storage Gateway verify Bucket Items

Now let's go to the AWS console (or, in our example, use S3 Browser or your favorite tool) and look at the S3 bucket for the file share to see what is there. Note that each file is an object, and the objects simply appear as files. If there were sub-directories, those would also exist. Note that there are other buckets that I have masked out, as we are only interested in the one named awsgwydemo, which is configured using S3 Standard storage.
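Since S3 itself has no real directories, the folder view is reconstructed from key prefixes. A small sketch of that idea (the helper name and sample keys are my own illustration):

```python
# Sketch: recover the directory hierarchy implied by a flat S3 key listing
# (helper name and sample keys are illustrative, not from the demo bucket).
def implied_directories(keys):
    dirs = set()
    for key in keys:
        parts = key.split("/")[:-1]            # drop the file-name component
        for i in range(1, len(parts) + 1):
            dirs.add("/".join(parts[:i]) + "/")
    return sorted(dirs)

keys = ["photos/2017/trip.jpg", "photos/index.txt", "readme.md"]
dirs = implied_directories(keys)   # every prefix becomes a visible folder
```

This is why a one-to-one file-to-object gateway can round-trip a directory tree cleanly: the hierarchy is fully encoded in the keys themselves.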

AWS Storage Gateway Volume

Now let's look at using the S3 Storage Gateway for volumes. Similar to deploying the File Gateway, start out at the AWS Storage Gateway page and select Volume Gateway, then select the type of host (EC2 cloud, or VMware or Hyper-V (2008 R2 or 2012) for on-premises deployment). Let's use the VMware gateway; however, as mentioned above, this is a different OVA/OVF than the File Gateway.

AWS Storage Gateway Configure Volume

Download the VMware OVA/OVF from AWS, then install it using your preferred VMware tools, making sure to configure the gateway per the instructions. Note that the Volume Gateway needs a couple of storage devices allocated to it. This means you will need to make sure that a SCSI adapter exists on the VM (or add one), along with the disks (HDD or SSD) for local storage. Refer to AWS documentation about how to size these; for my deployment I added a couple of small 80GB drives (you can choose to put them on HDD or SSD, including NVMe). If you get an error similar to the one below when connecting to the gateway, make sure that you are in fact using the Volume Gateway and not mistakenly using the File Gateway OVA (VM). Note that the default username is sguser and the password is sgpassword for the gateway.

AWS Storage Gateway Connect To Volume

Now connect to the local Volume Storage Gateway and notice the two local disks allocated to it.

AWS Storage Gateway Cached Volume Deploy

Next it's time to create the gateway, in this case deploying a Cached Volume as shown below.

AWS Storage Gateway Volume Create

Next up is creating a volume, along with its security and access information.

AWS Storage Gateway Volume Settings

Volume configuration continued.

AWS Storage Gateway Volume CHAP

And now some additional configuration of the volume including iSCSI CHAP security.
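For background on what CHAP is doing under the covers: per RFC 1994, which iSCSI CHAP builds on, the initiator proves it knows the shared secret by returning an MD5 digest over the identifier, the secret and a random challenge, so the secret itself never crosses the wire. A minimal sketch (the secret and identifier values are examples only):

```python
# Sketch of the CHAP computation (RFC 1994, used by iSCSI CHAP):
# response = MD5(identifier || shared-secret || challenge).
# Secret and identifier values here are examples only.
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """CHAP response: MD5 over the identifier byte, secret and challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"targetsecret1234"   # example; iSCSI initiators typically want 12-16 chars
challenge = os.urandom(16)     # the target picks a fresh random challenge each time
resp = chap_response(0x01, secret, challenge)

# Target-side verification: recompute with the stored secret and compare.
verified = resp == chap_response(0x01, secret, challenge)
```

Because the challenge is fresh each time, a captured response cannot simply be replayed later, which is the point of configuring CHAP on the volume.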

AWS Storage Gateway Windows Access

Which leads us to some Windows-related volume access and configuration.

AWS Storage Gateway Using iSCSI Volume

Now let's use the new iSCSI-based AWS Storage Gateway volume. On the left you can see various Windows command line activity, along with corresponding configuration information on the right.

AWS Storage Gateway Being Used by Windows

And there you have it, a quick tour of AWS Storage Gateway; granted, there are more options that you can try yourself.

What This All Means

Overall I like the improvements that AWS has made to the Storage Gateway, along with the different options it provides. Something to keep in mind is that if you are planning to use the AWS Storage Gateway file sharing mode, there are caveats around multiple concurrent writers to the same bucket. I would not be surprised if some other gateway or software-based tool vendors tried to throw some FUD at the Storage Gateway; however, ask them in turn how they coordinate multiple concurrent updates to a bucket while preserving data integrity.

Which Storage Gateway variant from AWS to use (e.g. File, Volume, VTL) depends on what your needs are, as does where the gateway is placed (cloud hosted, or on-premises with VMware or Hyper-V). Keep an eye on your costs, and on more than just the storage space capacity. This means paying attention to your access and request fees, different service levels, and data transfer fees.

You might wonder: what about EFS, and why would you want to use AWS Storage Gateway instead? Good question. At the time of this post, EFS has evolved from being internal (e.g. within AWS and across regions) to having an external-facing endpoint; however, there is a catch. That catch (which might have changed by the time you read this) is that the endpoint can only be accessed from AWS Direct Connect locations.

This means that if your servers are not in an AWS Direct Connect location, without some creative configuration, EFS is not an option. Thus Storage Gateway file mode might be an option in place of EFS, as might AWS storage access tools from others. For example, I have some of my S3 buckets mounted on Linux systems using S3FS for doing rsync or other operations from local to cloud. In addition to S3FS, I also have various backup tools that place data into S3 buckets for backup, BC and DR, as well as archiving.
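Since S3FS presents the bucket as a local directory, tools like rsync work unmodified; rsync's quick check decides what to copy from file size and modification time. A rough sketch of that decision (the one-second tolerance is my own assumption, since S3-backed mounts may round timestamps):

```python
# rsync-style "quick check": copy when the destination is missing or differs
# in size or modification time.  The one-second tolerance window is an
# assumption for S3-backed mounts that may round timestamps.
import os
import shutil
import tempfile

def needs_copy(src, dst, mtime_window=1.0):
    if not os.path.exists(dst):
        return True
    s, d = os.stat(src), os.stat(dst)
    return s.st_size != d.st_size or abs(s.st_mtime - d.st_mtime) > mtime_window

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "notes.txt")
with open(src, "w") as f:
    f.write("local copy")

dst = os.path.join(tmp, "bucket", "notes.txt")   # stands in for the S3FS mount
copy_needed = needs_copy(src, dst)               # destination missing -> copy

os.makedirs(os.path.dirname(dst))
shutil.copy2(src, dst)                           # copy2 preserves the mtime
copied_ok = not needs_copy(src, dst)             # now in sync -> nothing to copy
```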

Check out AWS Storage Gateway yourself and see what it can do or if it is a fit for your environment.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Another StorageIO Hybrid Momentus Moment

It's been a few months since my last post (read it here) about Hybrid Hard Disk Drives (HHDD) such as the Seagate Momentus XT that I have been using.

The Momentus XT HHDD I have been using is a 500GB 7,200RPM 2.5 inch SATA Hard Disk Drive (HDD) with 4GB of embedded flash (aka SSD) and 32MB of DRAM for buffering, hence the hybrid name.

I have been using the XT HHDD mainly for transferring large multi-GByte files between computers and for doing some disk-to-disk (D2D) backups while becoming more comfortable with it. While not as fast as my 64GB all-flash SSD, the XT HHDD is as fast as my 7,200RPM 160GB Momentus HDD, and in some cases faster on burst reads or writes. The notion of having a 500GB HDD affordable enough to support D2D was attractive; however, the ability to get a performance boost now and then via the embedded 4GB of flash opens many different possibilities, particularly when combined with compression.

Recently I switched the role of the Momentus XT HHDD from that of being a utility drive to becoming the main disk in one of my laptops. Despite many forums or bulletin boards touting issues or problems with the Seagate Momentus XT causing system hangs or Windows Blue Screen of Death (BSoD), I continued on with the next phase of testing.

Making the switch to XT HHDD as a primary disk

I took a few precautions, including eating some of my own dog food that I routinely talk about. For example, I made sure that the Lenovo T61 where the Momentus XT was going to be installed was backed up. In addition, I synced my traveling laptop so that it became the primary, so that I could continue working during the conversion, not to mention having an extra copy in addition to normal on and offsite backups.

Ok, let's get back to the conversion or migration from a regular HDD to the HHDD.

Once I knew I had a good backup, I used the Seagate DiscWizard (e.g. Acronis-based) tool to image the existing T61 HDD to the Momentus XT HHDD. Using DiscWizard (you could use other tools as well), I configured it to initialize the HHDD, which was attached via a Seagate GoFlex USB-to-SATA cable kit, and to image or copy the contents of the T61 HDD partitions to the Momentus XT. During the several hours it took to copy and create a new bootable disk image on the HHDD, I continued working on my travel or standby laptop.
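A generic way to confirm an image copy matches its source, whatever imaging tool you use, is to compare checksums of the two. A small sketch of that idea (the file names are local stand-ins, not real disk devices):

```python
# Generic image-verification sketch: hash the source and the copy, compare.
# The file names are local stand-ins for real partitions or disk images.
import hashlib
import os
import tempfile

def file_digest(path, chunk=1 << 20):
    """SHA-256 of a file, read in 1 MiB chunks so large images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "disk_src.bin")
img_path = os.path.join(tmp, "disk_img.bin")
for p in (src_path, img_path):
    with open(p, "wb") as f:
        f.write(b"\x00" * 4096)    # stand-in for identical partition contents

match = file_digest(src_path) == file_digest(img_path)
```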

After the image copy was completed and verified, it was time to reboot and see how Windows (XP SP3) liked the HHDD which all seemed to be normal. There were some parts of the boot that seemed a bit faster, however not 100 percent conclusive. The next step was to shutdown the laptop and physically swap the old internal HDD with the HHDD and reboot. The subsequent boot did seem faster and programs accessing large files also seemed to run a bit faster.

Keep in mind that the HHDD is still a spinning 7,200RPM disk drive, so comparisons to a full-time SSD would be apples to oranges, as would the cost-capacity difference between those devices. However, for what I wanted to see and use, the limited 4GB of flash does seem to provide a performance boost, and if I needed full-time super fast performance, I could buy a larger capacity SSD and install it. I'm going to hold off on buying any more larger capacity flash SSDs for the time being, however.

Do I see HHDDs appearing in SMB, SME or enterprise storage systems anytime soon? Probably not, at least not in primary storage systems. However, perhaps in some D2D backup, archive, or dedupe and VTL devices or other appliances.

Momentus XT Speed Bumps

Now, to be fair, there have been some bumps in the road!

The first couple of days were smooth sailing, other than hearing the mystery chirp the HHDD makes a couple of times a day. Lo and behold, after a couple of days, just as many forums had indicated, a mystery system hang occurred (and no, not like Windows might normally do, for those Microsoft cynics). Other than the inconvenience of a reboot, no data was lost, as files being updated were saved or had been backed up; not to mention that after the reboot, everything was intact anyway. So far just an inconvenience, or so I thought.

Almost 24 hours later, the same thing, except this time I got to see the BSoD, which candidly I very rarely see despite hearing stories from others. Ok, this was annoying; however, as long as I did not lose any data, other than time lost to a reboot, let's chalk it up to a learning experience and see where it goes. Now guess what: about 12 hours later, once again the system froze up, and this time I was in the middle of a document edit. This time I did lose about 8 minutes of typing that had not been auto-saved (I have since changed my auto save interval from 10 minutes to 5 minutes).

With this BSoD incident, I took some notes and, using the X61s, started checking some web sites, and verified that the BIOS firmware on the T61 was up to date. However, I noticed that the Seagate Momentus XT HHDD was at firmware 22 while a version 23 was available. Reading through some web sites and forums, I was on the fence about trying firmware 23, given that it appears a newer firmware version for the HHDD is in the works. I decided to forge forward with the experiment; after all, no real data loss had occurred, and I still had the X61s, not to mention the original T61 HDD to fall back on in the worst case.

Going to the Seagate web site, I downloaded the firmware 23 install kit and ran it per their instructions, which was a breeze, and then did the reboot.

It has not been quite a week yet; however, knocking on wood, while I keep expecting to see one, no BSoD or system freezes have occurred. Having said that, and still knocking on wood, I'm also making sure things are backed up, protected and ready if needed. Likewise, if I start to see a rash of BSoDs, my plan is to fall back to the original T61 HDD, bring it up to date and use it until a newer HHDD firmware version is available to resume testing.

What is next for my Seagate Momentus XT HHDD?

I'm going to wait to see if the BSoD and mystery system hangs disappear, as well as for the arrival of the new firmware, followed by some more testing. However, when I'm confident with it, the next step is to put the XT HHDD into the X61s, which is used primarily for travel purposes.

Why wait? Simple: while I can tolerate a reboot, crash, data loss or disruption while in the office, given access to copies as well as standby or backup systems to work from, when traveling the options are more limited. Sure, if there is data loss I can go to my cloud provider and rapidly recall a file or multiple files as needed, or for critical data, recover from a portable encrypted USB device. Consequently I want more confidence in the XT HHDD before deploying it for travel mode; it is probably safe to do so as of now, however I want to see how stable it is in the office before taking it on the road.

What does this all mean?

  • Simple, have a backup of your data and systems
  • Test and verify those backups or standby systems periodically
  • Have a fall back plan for when trying new things
  • Keep productivity in mind, at some point you may have to fall back
  • If something is important enough to protect, have multiple copies
  • Be ready to eat your own dog food (practice what you talk about)
  • Do not be scared, however be prepared, look before you leap

How about you: are you using an HHDD yet, and if so, what are your experiences? I am curious to hear if anyone has tried using an HHDD in their VMware lab environment yet in place of a regular HDD, before spending a boatload of money on a similarly sized SSD.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
