Kevin has been involved in database performance (and porting) optimization for decades, which means dealing with server CPU, memory, I/O and storage issues, resources and tuning. Part of server and storage I/O tuning is understanding the workloads, as well as the demands of software such as databases, along with how they use CPU and the resulting impact on resources. This means that somewhere in the technology stack, server CPUs are still needed, even in serverless environments.
We also discuss metrics and gaining insight into resource usage, what the numbers mean, including how CPU wait may be costing you lost productivity and overhead, as well as benchmarks, simulations and related themes. Check out Kevin's website www.kevinclosson.net to learn more about Oracle, databases, SLOB, tools and other content. Listen to the podcast discussion here (42 minutes) as well as on iTunes.
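As a rough illustration of the CPU wait point, here is a minimal sketch (an assumption on my part, not something from the podcast) that samples Linux /proc/stat twice and reports what share of CPU time went to I/O wait over the interval:

```python
# Minimal sketch (illustrative only, assumes a Linux host): sample /proc/stat
# twice and report how much CPU time was spent waiting on I/O in between.
import time

def read_cpu_times():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

before = read_cpu_times()
time.sleep(5)                      # sample interval in seconds
after = read_cpu_times()

deltas = [a - b for a, b in zip(after, before)]
iowait = deltas[4]                 # fifth counter is iowait
print(f"CPU time in I/O wait over sample: {100.0 * iowait / sum(deltas):.1f}%")
```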
Where to learn more
Learn more about Oracle, database performance and benchmarking, along with other tools, via the following links:
Server StorageIO.tv (various videos and podcasts, fun and for work)
What this all means and wrap-up
Check out my discussion here with Kevin Closson where you may have some déjà vu, or learn something new on server, storage I/O, database performance, software, benchmark workloads as well as much more. Also available on iTunes.
Server Storage I/O Insight into Microsoft Windows Server 2016
Updated 12/8/16
In case you had not heard, Microsoft announced the general availability (GA, also known as release to manufacturing (RTM)) of the newest version of its Windows server operating system, aka Windows Server 2016, along with System Center 2016. Note that in addition to being released to traditional manufacturing distribution mediums and MSDN, the Windows Server 2016 bits are also available on Azure.
Windows Server 2016 Welcome Screen – Source Server StorageIOlab.com
For some this might be news, or a refresh of what Microsoft announced a few weeks ago (e.g. the formal announcement). Likewise, some of you may not be aware that Microsoft is celebrating Windows Server's 20th birthday (read more here).
Yet for others who have participated in the public beta, aka public technical previews (TP), over the past year or two, or who have simply followed the information coming out of Microsoft and other venues, there should not be a lot of surprises.
What's New With Windows Server 2016
Windows Server 2016 Desktop and tools – Source Server StorageIOlab.com
Besides new user interfaces including the visual GUI and PowerShell among others, there are many new features and functionalities summarized below:
Free ebook: Introducing Windows Server 2016 Technical Preview (Via Microsoft Press)
Check out the above free ebook; after looking through it, I recommend adding it to your bookshelf. There is lots of good intro and overview material for Windows Server 2016 to get you up to speed quickly, or as a refresher.
Storage Spaces Direct (S2D) CI and HCI
Storage Spaces Direct (S2D) builds on Storage Spaces that appeared in earlier Windows and Windows Server editions. Some of the major changes and enhancements include the ability to leverage local direct attached storage (DAS) such as internal (or external) dedicated NVMe, SAS and SATA HDDs, as well as flash SSDs, that are used for creating software-defined storage for various scenarios.
Scenarios include disaggregated converged infrastructure (CI) as well as aggregated hyper-converged infrastructure (HCI) for Hyper-V among other workloads. Windows Server 2016 S2D nodes communicate (from a storage perspective) via a software storage bus. Data protection and availability are enabled between S2D nodes via Storage Replica (SR), which can do software-based synchronous and asynchronous replication.
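To give a rough sense of the capacity trade-off involved when data is kept resilient across nodes, here is a back-of-the-envelope sketch (my own simplified assumptions, not Microsoft sizing guidance):

```python
# Back-of-the-envelope sketch (simplified assumptions, not Microsoft sizing
# guidance): usable capacity when data is kept as N mirrored copies spread
# across nodes, with some capacity held back for rebuilds after a failure.

def usable_capacity_tb(nodes, drives_per_node, drive_tb, copies=3, reserve=0.10):
    raw_tb = nodes * drives_per_node * drive_tb
    return raw_tb * (1 - reserve) / copies

# Hypothetical example: 4 nodes, each with 8 x 2 TB drives, three-way mirror
print(f"{usable_capacity_tb(4, 8, 2.0):.1f} TB usable")  # ~19.2 TB of 64 TB raw
```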
The following is a Microsoft-produced YouTube video providing a nice overview of and insight into Windows Server 2016 and Microsoft software-defined storage aka S2D.
A common question that comes up with servers, storage, I/O and software defined data infrastructure is what about performance?
Following are various links to different workloads showing performance for Hyper-V, S2D and Windows Server. Note that, as with any benchmark, workload or simulation, take them for what they are: something to compare that may or may not be applicable to your own workloads and environments.
Large scale VM performance with Hyper-V and in-memory transaction processing (Via Technet)
Benchmarking Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (Via cisjournal PDF)
Server 2016 Impact on VDI User Experience (Via LoginVSI)
Storage IOPS update with Storage Spaces Direct (Via TechNet)
SQL Server workload (benchmark) Order Processing Benchmark using In-Memory OLTP (Via Github)
Setting up testing Windows Server 2016 and S2D using virtual machines (Via MSDN blogs)
Storage throughput with Storage Spaces Direct S2D TP5 (Via TechNet)
For those of you not as familiar with Microsoft Windows Server and related topics, or that simply need a refresh, here are several handy links as well as resources.
Introducing Windows Server 2016 (Free ebook from Microsoft Press)
While Microsoft Windows Server recently celebrated its 20th birthday (or anniversary), a lot has changed as well as evolved. This includes Windows Server 2016 supporting new deployment and consumption models (e.g. lightweight Nano, full data center with desktop interface, on-premises, bare metal, virtualized (Hyper-V, VMware, etc.) as well as cloud). Besides how it is consumed and configured, which can also be in CI and HCI modes, Windows Server 2016 along with Hyper-V extends the virtualization and container capabilities into non-Microsoft environments, specifically around Linux and Docker. Not only is the support for those environments and platforms enhanced, so too are the management capabilities and interfaces, from PowerShell to the Bash Linux shell being part of Windows 10 and Server 2016.
What this all means is that if you have not looked at Windows Server in some time, it's time you do. Even if you are not a Windows or Microsoft fan, you will want to know what has been updated (perhaps even update your FUD if that is the case) to stay current. Get your hands on the bits and try Windows Server 2016 on a bare metal server, as a VM guest, or via cloud including Azure, or simply leverage the above resources to learn more and stay informed.
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved
As I always do, I invite attendees to feel free to follow up via email, Twitter, LinkedIn, Google+ or other venues with questions, comments, discussions and what they are seeing or running into in their environments.
Some of the many different items discussed during my StorageExpo presentations included:
Recently Hans followed up and sent me some comments, asking if I would be willing to share them with others, such as whoever happens to read this. I also suggested to Hans that he start a blog (here is a link to his new blog), and that I would be happy to post his comments, shown below, for others to see and join in the conversation.
Hans Breemer wrote:
Hi Greg,
we met each other recently at the Dutch Storage Expo after one of your sessions. We briefly discussed the current trends in the storage market, and the “risks” or “threats” (read: challenges) they mean for “us”, the storage guys. Often neglected by the sales guys…
Please allow me a few lines to elaborate a bit more and share some thoughts from the field. :-)
1. Bigger is not better?
Each iteration in the new disk technologies (SATA or SAS) means we get fewer IOPS for the buck. Pound for pound, that is. Of course the absolute number of IOPS we can get from an HDD increases all the time: where 175 IOPS was top speed a few years ago, we sometimes see figures close to 220 IOPS per physical drive now. This looks good in the brochure, just as the increased capacity does. However, what the brochure doesn't tell us is that if we look at the IOPS/capacity ratio, we're walking backwards. A few years ago we could easily sell over 1000 IOPS/TB. Currently we can't anymore; we're happy to reach 500 IOPS/TB. I know it has always been like that, but with the introduction of SATA in the enterprise storage world, I feel things have gotten even worse.
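To make the ratio argument concrete, here is a small illustrative sketch (the per-drive IOPS figures echo those above; the drive capacities and the SATA numbers are round examples of my own, not measured data):

```python
# Illustrative only: per-drive IOPS creep up slowly while capacity grows fast,
# so the IOPS-per-TB ratio of an HDD-based pool keeps falling.
drives = [
    # (description, IOPS per drive, capacity in TB) - round example numbers
    ("older 146 GB 15k drive", 175, 0.146),
    ("600 GB 15k drive",       220, 0.600),
    ("4 TB 7.2k SATA drive",    80, 4.000),
]

for name, iops, tb in drives:
    print(f"{name:24s} {iops:4d} IOPS  {iops / tb:7.1f} IOPS/TB")
# The first drive delivers well over 1000 IOPS/TB, the big SATA drive only ~20.
```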
2. But how about SSD’s then?
True, and I agree. In a world of HDD's growing bigger and bigger, we actually need SSD's, and this technology is the way forward from an IOPS perspective. SSD's have a great future ahead of them (despite being with us already for some time). I do doubt that at the moment SSD's already have the economical ability to fill the gap though. They offer many thousands of IOPS, and for dedicated high-end solutions they offer what we weren't able to deliver for decades: more IOPS than you need! But what about the “1000 IOPS/TB” market? Let's call it the middle market.
3. SSD’s as a lubricant?
You must have heard every vendor about Adaptive Storage Tiering, Auto Tiering etc. All based on the theorem that most of our IO's come from a relatively small disk section. Thus we can improve the total performance of our array by only adding a few percent of SSD. Smart technology identifies the hot tracks on our disks and promotes these to SSD's. We can even demote cold tracks to big SATA drives. Think green, think ecological footprint, etc. For many applications this works well. Regular Windows servers, file servers and VMware ESX servers actually seem to like adaptive storage tiering, and I think I know why: a positive tradeoff of using VMDK's. (I might share a few lines about FAST VP do's and don'ts next time if you don't mind.)
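As a toy sketch of that hot-track idea (the extent names, IO counts and SSD budget below are made-up illustration data, not how any particular array implements it):

```python
# Toy sketch of auto tiering: promote the busiest extents until the SSD tier
# is full, leave everything else on the capacity (SATA) tier.
# Extent names, IO counts and the SSD budget are made-up illustration data.
extents = {"ext-0": 5, "ext-1": 900, "ext-2": 12, "ext-3": 450,
           "ext-4": 3, "ext-5": 700, "ext-6": 30, "ext-7": 8}
ssd_slots = 3  # how many extents fit on the SSD tier

hot_first = sorted(extents, key=extents.get, reverse=True)
promoted = set(hot_first[:ssd_slots])

for name, ios in extents.items():
    tier = "SSD" if name in promoted else "SATA"
    print(f"{name}: {ios:4d} IOs/interval -> {tier}")
```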
4. How about the middle market then, you might ask? Or, SSD's as a band-aid?
For the middle market, the above developments are sort of a disaster. Think SAP running on Sun Solaris, think the average Microsoft SQL Server, think Oracle databases. These are the typical applications that need “middle market” IOPS. Many of these applications have a freakish IO pattern: OLTP during daytime, backup in the evening and batch jobs at night. Not to mention end-of-month runs, and DTA (Dev-Test-Acceptance) streets that sleep for two weeks or are constantly upgraded or restored. These applications hardly benefit from “smart technologies”. The IO behavior is too random and too unpredictable, leading to saturated SATA pools, and EFD's that are hardly doing more IO's than the FC drives they're supposed to relieve. Add more SSD's, we're told. Use less SATA, we're told. But it hardly works. Recently we acquired a few new Vmax arrays without EFD or FAST VP, for the sole purpose of hosting these typical middle market applications. Affordable, predictable performance. But then again, our existing Vmax 20k had full-size 600 GB 15k rpm drives; with the Vmax 40k we're “encouraged” to use small form factor 600 GB 10k rpm drives. Again a small step backwards?
5. The storage tiering debacle.
Last but not least, some words I'd like to share with you about storage tiering. We're encouraged (again) to sell storage in different tiers. Makes sense. To some extent it does, yes. Host your most IO-eager applications on expensive, SSD-based storage, and host your DTA or other less business-critical applications on FC or SATA quality HDD's. But what if the less business-critical application needs to be backed up in the evening, and while doing so completely saturates your SATA pool? Or what if the Dev server creates just as many IO's as the Prod environment does? People don't seem to care, it seems. To make people realize how many IO's they actually need and use, we are reporting IO graphs for all servers in our environment. Our tiering model is based on IOPS/TB and IO response time.
Tier X would be expensive, offering 800 IOPS/TB @ avg 10 ms.
Tier Y would be the cheaper option, offering 400 IOPS/TB @ avg 15 ms.
The next step will be to implement front-end controls and actually limit a host to some ceiling; for instance, 2 times the limit described in the tier description, thus allowing for peak loads and backups.
Do we need to? I think so…
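As a simple sketch of what such a ceiling could look like (the tier rates come from the example above; the function and the 2x headroom factor are just an illustration of the idea):

```python
# Sketch of a per-host IOPS ceiling based on the tier model above:
# ceiling = allocated TB * tier IOPS/TB * headroom factor (2x for peaks/backups).
TIERS_IOPS_PER_TB = {"X": 800, "Y": 400}  # from the example tiers above

def iops_ceiling(tier, allocated_tb, headroom=2.0):
    return TIERS_IOPS_PER_TB[tier] * allocated_tb * headroom

# A host with 5 TB on tier Y: steady-state budget 2000 IOPS, capped at 4000.
print(iops_ceiling("Y", 5))  # 4000.0
```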
Greg, this small message is slowly turning into a plea. And that is actually what it is, a plea to our storage vendors, and to our evangelists. If they want us to deliver, I feel they should talk to us, and listen to us (and you!).
Cheers,
Hans Breemer
ps, I love my job, this world and my role translating promises and demands into solutions that work for my customers. I do take care though not to create solutions that will not work, despite what the brochure said.
pps, please feel free to share the above if needed.
Here is my response to Hans:
Hello Hans good to hear from you and thanks for the comments.
Great perspectives and in the course of talking with your peers around the world, you are not alone in your thinking.
Often I see disconnects between customers and vendors. Vendors (often driven by their market research) believe they know what the customer's needs and issues are, and many actually do. However, I often see a reliance on market research data with many degrees of separation, as opposed to direct and candid insight. Likewise, some vendors spend more time talking about how they listen to the customer vs. how much time they actually do so.