1. Executive Summary
Transactions are the fundamental underpinning of an organization, and they must be executed well, with both integrity and performance. Not only has transaction volume soared in recent years, but the granularity of transaction detail has reached new heights. Fast transactions greatly improve the efficiency of a high-volume business, which makes performance critically important.
There are a variety of databases available for transactional applications. Ideally, any database would have the required capabilities; however, depending on the application’s scale and the chosen cloud, some database solutions can be prone to delays. Recent information management trends show organizations shifting their focus to cloud-based solutions. In the past, the clear choice for most organizations was on-premises data on on-premises hardware, but the costs of scale have chipped away at the notion that this is the best approach for some, if not all, of a company’s transactional needs. The factors driving operational and analytical data projects to the cloud are many, and advantages like data protection, high availability, and scale are readily realized with infrastructure-as-a-service (IaaS) deployments. In many cases, a hybrid approach serves as an interim step for organizations migrating to a modern, capable cloud architecture.
This report outlines the results from two GigaOm Field Tests (one transactional and the other analytic) derived from the industry-standard TPC Benchmark™ E (TPC-E) and TPC Benchmark™ H (TPC-H) to compare two IaaS cloud database offerings:
- Microsoft SQL Server on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instances.
- Microsoft SQL Server on Microsoft Azure Virtual Machines (VMs).
Both are installations of Microsoft SQL Server 2019 running on Windows Server, tested using the most recent versions available as pre-configured machine images.
Data-driven organizations also rely on analytic databases to load, store, and analyze volumes of data at high speed to derive timely insights. Data volumes within modern organizations’ information ecosystems are rapidly expanding, placing significant performance demands on legacy architectures. To fully harness their data for competitive advantage, businesses today need modern, scalable architectures that deliver high performance and reliability for timely analytical insights. In addition, many companies favor fully managed cloud services. With fully managed, as-a-service deployment models, companies can leverage powerful data platforms without the technical debt and the burden of finding talent to manage the resources and architecture in-house. With these models, users pay only for what they use and can stand up a fully functional analytical platform in the cloud with just a few clicks.
The results of the GigaOm Transactional Field Test are valuable to all operational functions of an organization, such as human resource management, production planning, material management, financial supply chain management, sales and distribution, financial accounting and controlling, plant maintenance, and quality management. The Analytic Field Test results are insightful for many of these same departments today using SQL Server, which is frequently the source for interactive business intelligence (BI) and data analysis.
Testing hardware and software across cloud vendors is challenging. Configurations favor one cloud vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the benchmarking workload. Our testing demonstrates a narrow slice of potential configurations and workloads.
The best transactions per second (tps) for Azure SQL Server 2019 Enterprise on Windows was 42% higher than for AWS SQL Server 2019 Enterprise on Windows. Likewise, the best queries per hour (QPH) for Azure SQL Server 2019 Enterprise on Windows was 41% higher than for AWS SQL Server 2019 Enterprise on Windows Server.
When it comes to transaction processing, Azure’s price-performance is 23% less expensive than the price-performance of AWS SQL Server on Windows without license mobility. With license mobility, Azure SQL Server on Windows provided price-performance that was 27% less expensive than AWS. Azure price-performance is almost 31% less expensive than the price-performance of AWS SQL Server on Windows with license mobility and a three-year commitment.
For analytic processing, the price-performance of Azure SQL Server on Windows without license mobility proved to be 21% less expensive than AWS. With license mobility in place, the price-performance advantage for Azure widened to 23%. And for Azure SQL Server on Windows with license mobility and a three-year commitment, price-performance was 25% less expensive than AWS.
As the report sponsor, Microsoft selected the particular Azure configuration it wanted to test. GigaOm selected the closest AWS instance configuration for CPU, memory, and disk configuration.
We leave the issue of fairness for the reader to determine. We strongly encourage you to look past marketing messages and discern what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of platform selection.
In the spirit of the TPC, price-performance is intended to be a normalizer of performance results across different configurations. Of course, this has its shortcomings, but at least one can determine that “what you pay for and configure is what you get.”
The parameters to replicate this test are provided in this report. We used the BenchCraft tool, audited by a TPC-approved auditor who reviewed all updates to BenchCraft. All the information required to reproduce the results is documented in the TPC-E specification. BenchCraft implements the requirements documented in Clauses 3, 4, 5, and 6 of the benchmark specification. Nothing in BenchCraft alters the performance of TPC-E or this TPC-E-derived workload.
The scale factor in TPC-E is defined as the number of required customer rows per single tpsE. We changed the number of initial trading days (ITD): the default value is 300, the number of eight-hour business days used to populate the initial database, and for these tests we used an ITD of 30 days rather than 300. This reduces the size of the initial database population in the larger tables. As far as the transaction profiles are concerned, the workload behaves identically with an ITD of 300 or 30. Because the ITD was reduced to 30, the results are not compliant with the TPC-E specification and, therefore, are not comparable to published results. This is the basis for the standard disclaimer that this is a workload derived from TPC-E.
However, BenchCraft is just one way to run TPC-E. All the information necessary to recreate the benchmark is available at TPC.org (this test used the latest version, 1.14.0); simply change the ITD as described above.
We have provided enough information in the report for anyone to reproduce these tests. You are encouraged to compile your own representative queries, data sets, data sizes, and test compatible configurations applicable to your requirements.
2. Cloud IaaS SQL Server Offerings
Relational databases are a cornerstone of an organization’s data ecosystem. While alternative SQL platforms are growing with the data deluge and have their place, decision-makers platforming a workload usually choose a relational database, and for good reason. Since 1989, Microsoft SQL Server has proliferated to near-ubiquity as the relational server of choice for the original database use case, online transaction processing (OLTP), and beyond. Now SQL Server is available on fully functional infrastructure offered as a service, taking complete advantage of the cloud. These infrastructure-as-a-service (IaaS) cloud offerings provide predictable costs and savings, fast response times, and strong non-functional capabilities.
As our testing confirms, the main difference between SQL Server on Azure and SQL Server on AWS is the storage I/O performance.
Microsoft SQL Server on Azure Virtual Machines Storage Options
Azure recommends either Premium Managed Disk or Ultra Disk for operationally intensive, business-critical workloads. While Ultra Disk is Azure’s high-end disk, we chose to test Premium Managed Disks. Premium Managed Disks are high-performance SSDs designed to support I/O intensive workloads and provide high throughput and low latency, but with a balanced cost compared to Ultra Disk. Premium SSD Managed Disks are provisioned as a persistent disk with configurable size and performance characteristics. They can also be detached and reattached to different virtual machines.
The cost of Premium SSD Managed Disks depends on the number and size of the disks selected, as well as the number of outbound data transfers. Different disk sizes provide different IOPS, throughput (MB/second), and monthly prices per GiB. Several persistent disks attached to a VM can support petabytes of storage per VM. Individual Premium disks such as the 1 TB P30 deliver up to 5,000 IOPS and 200 MB per second each, which translates to less than one millisecond latency for read operations in applications that take advantage of read caching. Premium SSD Managed Disks are supported by DS-series, FS-series, and GS-series VMs. The largest single disk is the P80, with 32 TB of storage, IOPS up to 20,000, and 900 MB per second of throughput.
For additional performance, Azure offers local cache options: read/write and read-only. The local cache is a specialized component that stores data, typically in memory, for fast access. The read cache attempts to reduce read latency, while with the write cache, data destined for permanent storage is queued in the cache before being persisted. This feature is not available on all disk types, nor is it available for temporary storage disks. Also, not all applications can leverage the cache. According to Microsoft:
Caching uses specialized, sometimes expensive, temporary storage with faster read and write performance than permanent storage. Because cache storage is often limited, decisions must be made about what data operations benefit most from caching. But even where the cache can be made widely available, such as in Azure, it’s still important to know the workload patterns of each disk before deciding which caching type to use.
Use of the write cache could cause data loss. Specifically, with regard to write caching, Microsoft cautions:
If you are using read/write caching, you must have a proper way to write the data from cache to persistent disks. For example, SQL Server handles writing cached data to the persistent storage disks on its own. Using read/write cache with an application that does not handle persisting the required data can lead to data loss if the VM crashes.
Microsoft SQL Server on Amazon Web Services Elastic Compute Cloud (AWS EC2) Instances Storage Options
Amazon Web Services offers Elastic Block Store (EBS) as an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2). EBS supports a range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, and file systems. With EBS, AWS customers can choose from four volume types to balance price and performance, achieving single-digit millisecond latency for high-performance database workloads.
Amazon EBS offers three types of solid-state drive volumes: General Purpose SSD (gp2 and gp3) and Provisioned IOPS SSD (io1). Provisioned IOPS volumes are more akin to Azure Ultra Disk, so we chose General Purpose SSD (gp3) volumes to balance price and performance for these workloads. AWS recommends this volume type for most workloads, while gp2 is better suited to system boot volumes, virtual desktops, low-latency applications, and development/test environments.
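A practical property of gp3 worth noting is that IOPS and throughput are provisioned independently of volume size. As a hedged sketch (the region, availability zone, and the use of the boto3 API rather than the console are assumptions, not necessarily how volumes were provisioned for this test), a gp3 data volume matching the Table 1 specification could be created along these lines:

```python
import boto3

# Hypothetical provisioning of one of the gp3 data volumes from Table 1.
ec2 = boto3.client("ec2", region_name="us-west-2")

volume = ec2.create_volume(
    AvailabilityZone="us-west-2a",  # placeholder AZ
    Size=2048,                      # GiB; a 2 TB data volume
    VolumeType="gp3",
    Iops=14733,                     # gp3 IOPS are set independently of size
    Throughput=420,                 # MiB/s, also set independently
)
print(volume["VolumeId"])
```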
For the test, we chose one of AWS’s Nitro-based instances. Like Azure, for the SQL Server temporary database (tempdb), we used a solid-state drive.
One of our main objectives in this benchmark is to test an I/O intensive workload on Amazon and Azure’s speed-cost balanced SSD volume types head to head. We want to understand both the performance and price-per-performance differences between two leading cloud vendors’ SQL Server offerings. AWS does not offer the local cache feature available in Azure, so we wanted to see the difference in performance when read-only cache was enabled on Azure data disks compared to AWS without the benefit of the local cache.
3. Field Test Setup
GigaOm Transactional Field Test
The GigaOm Transactional Field Test is a workload derived from the well-recognized industry-standard TPC Benchmark™ E (TPC-E). Aspects of the workload, such as transaction mix, were modified from the standard TPC-E benchmark for ease of benchmarking, and as such, the results generated are not comparable to official TPC Results. From tpc.org:
TPC Benchmark™ E (TPC-E) is an OLTP workload. It is a mixture of read-only and update-intensive transactions that simulate the activities found in complex OLTP application environments. The database schema, data population, transactions, and implementation rules have been designed to broadly represent modern OLTP systems. The benchmark exercises a breadth of system components associated with such environments.
The TPC-E benchmark simulates the transactional workload of a brokerage firm with a central database that executes transactions related to the firm’s customer accounts. The data model consists of 33 tables, 27 of which carry the schema’s 50 foreign key constraints. The TPC-E results are valuable to all operational functions of an organization, many driven by SQL Server, which is frequently the source for operational interactive business intelligence (BI).
Field Test Data
The data sets used in the benchmark were generated based on the information provided in the TPC Benchmark™ E (TPC-E) specification. For this testing, we used the database scaled for 1 million customers, which determined the initial data volume of the database. Per the specification, the number of customers is multiplied by 17,280 to determine the number of rows in the TRADE table; for example, 800,000 customers would yield 13,824,000,000 rows. All of the other tables were scaled according to the TPC-E specification and rules. Besides the scale factor, the test offers a few other “knobs” we turned to determine the database engine’s maximum throughput capability on AWS and Azure.
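To make the scaling arithmetic concrete, the sketch below reproduces the TRADE-table math, assuming the TPC-E specification’s defaults of 500 customers per tpsE and eight-hour trading days, and shows the effect of the reduced ITD described earlier:

```python
# A sketch of the TPC-E initial-population arithmetic, assuming the spec's
# defaults of 500 customers per tpsE and eight-hour trading days.
def trades_per_customer(itd_days: int) -> int:
    seconds_per_day = 8 * 3600        # eight-hour business day
    customers_per_tps = 500           # TPC-E scale factor
    return itd_days * seconds_per_day // customers_per_tps

def trade_rows(customers: int, itd_days: int) -> int:
    return customers * trades_per_customer(itd_days)

print(trades_per_customer(300))   # 17,280 trades per customer at default ITD
print(trade_rows(800_000, 300))   # 13,824,000,000 rows, as in the example above
print(trade_rows(800_000, 30))    # 1,382,400,000 rows with the reduced ITD
```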
We completed three runs per test on each platform, with each run lasting at least two hours, and took the average transactions per second over the last 30 minutes of each run. A full backup was restored between runs to reset the database to its original state. The results are shared in the Field Test Results section.
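For clarity, the reported metric can be expressed as a short sketch; how the per-second samples are collected (for example, parsed from BenchCraft output) is an assumption here, not part of the documented procedure:

```python
# A minimal sketch of the reported metric: average tps over the final
# 30 minutes of a run, given one throughput sample per second.
def avg_tps_last_30_minutes(tps_samples: list[float]) -> float:
    window = 30 * 60                  # 1,800 one-second samples
    tail = tps_samples[-window:]
    return sum(tail) / len(tail)
```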
Database Environments
Selecting and sizing the compute and storage for comparison is challenging, particularly across two different cloud vendors’ offerings. There are various offerings between AWS and Azure for transaction-heavy workloads. As you will see in Table 1, there was not an exact match in processors or memory at the time of testing and publication.
We considered the variety of offerings on AWS and selected the memory-optimized R5b family. We used R5b in previous testing and believe it is a solid performer. Its description is similar to the Azure offering. R5b is described as “optimized for memory-intensive and latency-sensitive database workloads, including data analytics, in-memory databases, and high-performance production workloads.”
On the Azure side, we expect mission-critical-minded customers to gravitate toward the Ebdsv5 family, described as delivering “higher remote storage performance in each VM size compared to Ev5 VM series.” The Ebdsv5 family allows up to 120,000 IOPS and 4,000 MBps of remote disk storage throughput. Our approach was to find the “nearest neighbor” best fit, and the challenge was selecting a balance of both CPU and memory. The r5b.8xlarge on AWS has 32 vCPUs and 256 GiB of memory. The E32bds_v5 on Azure likewise offers 32 vCPUs and 256 GiB of memory, the same as the r5b.8xlarge. This was our best, most diligent effort at selecting comparable compute hardware for our testing.
In terms of storage, our objective was to test both Azure Premium SSD Managed Disks and AWS General Purpose SSD (gp3) volumes. For the Transactional Field Test, we utilized read-only cache on the Azure data disks. On both Azure and AWS, we deployed multiple disks for the SQL Server data and log files and combined them using Simple Storage Pools (RAID 0 disk striping) in Windows. Azure recommends striping the disks because of the design of the platform: Azure Premium Storage is offered in fixed sizes with fixed IOPS limits, so striping equally sized disks is the publicly documented best practice to achieve a desired size/IOPS configuration. On AWS, we provisioned a set of gp3 volumes, likewise striped with Storage Pools in Windows.
Another configuration difference that may have impacted our results: on the Azure virtual machine, we used the locally attached temporary storage for the SQL Server tempdb database, while the AWS EC2 r5b instances do not have locally attached storage, so tempdb was placed on the root drive. The SQL Server tempdb stores internal objects created by the database engine, such as work tables for sorts, spools, hash joins, hash aggregates, and intermediate results. Having tempdb on local temporary storage usually means higher I/O performance.
The Azure configuration had 90,000 total IOPS, and the AWS configuration had 86,667. With AWS gp3 volumes, you can choose the IOPS, so we arranged the disks to give the maximum allowed IOPS for the instance. We worked to employ equivalent configurations for these tests despite the different storage profiles of AWS and Azure.
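The totals can be checked against the per-volume figures in Table 1; note that the per-volume AWS IOPS are rounded in the table, so the computed sum lands a couple of IOPS shy of the 86,667 reported:

```python
# Totaling the provisioned IOPS of the striped volumes in Table 1.
aws = {"data": (5, 14_733), "log": (1, 10_000), "root": (1, 3_000)}
azure = {"data": (16, 5_000), "log": (2, 5_000)}

total = lambda cfg: sum(n * iops for n, iops in cfg.values())
print(total(aws))    # 86,665 (report: 86,667; per-volume IOPS are rounded)
print(total(azure))  # 90,000
```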
Results may vary across different configurations, and again, you are encouraged to compile your own representative queries, data sets, data sizes, and test-compatible configurations applicable to your requirements. All told, our testing included two different database environments.
Table 1. Configurations Used for Tests
Cloud | AWS | Azure |
---|---|---|
Database | SQL Server 2019 Enterprise on Windows Server 2019 Datacenter | SQL Server 2019 Enterprise on Windows Server 2022 Datacenter |
Build* | Microsoft SQL Server 2019 (RTM-CU12) (KB5004524) - 15.0.4153.1 (X64) Jul 19 2021 15:37:34 Enterprise Edition: Core-based Licensing (64-bit) Windows Server 2019 Datacenter 10.0 <X64> (Build 17763) (Hypervisor) | Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64) Jan 12 2022 22:30:08 Enterprise Edition: Core-based Licensing (64-bit) Windows Server 2022 Datacenter 10.0 <X64> (Build 20348) (Hypervisor) |
Region | Oregon | North Central US |
Instance Type | r5b.8xlarge | E32bds_v5 |
vCPU | 32 | 32 |
RAM (GiB) | 256 | 256 |
Storage Configuration | 5x 2 TB gp3 (14,733 IOPS, 420 MB/s) data; 1x 1 TB gp3 (10,000 IOPS, 200 MB/s) log; 1x 0.5 TB gp3 (3,000 IOPS, 200 MB/s) root | 16x P30 1 TB (5,000 IOPS, 200 MB/s) data (read-only cache); 2x P30 1 TB (5,000 IOPS, 200 MB/s) log (no cache) |
Total IOPS | 86,667 | 90,000 |
Source: GigaOm 2022
*At the time of testing, AWS did not offer an Amazon Machine Image with both SQL Server 2019 CU15 and Windows Server 2022 in the EC2 Launch Wizard or AWS Launch Wizard for SQL Server. We used the latest offered, which was SQL Server 2019 CU12 on Windows Server 2019.
Other SQL Server settings include:
- Max degree of parallelism: 1
- Max server memory: 235,930 MB, which is 90% of total available system memory (256 GiB)
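As a minimal sketch of applying these two settings (assuming pyodbc and a placeholder connection string; this is not necessarily how the test images were configured), both can be set with sp_configure:

```python
# Both options are advanced, so 'show advanced options' is enabled first.
# Server name and authentication details are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()
for option, value in [
    ("show advanced options", 1),
    ("max degree of parallelism", 1),
    ("max server memory (MB)", 235_930),
]:
    cursor.execute(f"EXEC sp_configure '{option}', {value}; RECONFIGURE;")
```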
GigaOm Analytical Field Test
The setup for this Field Test was informed by the TPC Benchmark™ H (TPC-H) spec validation queries. This is not an official TPC benchmark. The queries were executed using the following setup, environment, standards, and configurations.
Database Environments
The following table shows the configurations we used for the Analytic Field Test. The main difference is a slightly different data drive configuration for Azure. Results may vary across different configurations, and again, you are encouraged to compile your own representative queries, data sets, data sizes, and test-compatible configurations applicable to your requirements. All told, our testing included two different database environments.
Table 2. Configurations Used for Tests
Cloud | AWS | Azure |
---|---|---|
Database | SQL Server 2019 Enterprise on Windows Server 2019 Datacenter | SQL Server 2019 Enterprise on Windows Server 2022 Datacenter |
Build* | Microsoft SQL Server 2019 (RTM-CU12) (KB5004524) - 15.0.4153.1 (X64) Jul 19 2021 15:37:34 Enterprise Edition: Core-based Licensing (64-bit) Windows Server 2019 Datacenter 10.0 <X64> (Build 17763) (Hypervisor) | Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64) Jan 12 2022 22:30:08 Enterprise Edition: Core-based Licensing (64-bit) Windows Server 2022 Datacenter 10.0 <X64> (Build 20348) (Hypervisor) |
Region | Oregon | North Central US |
Instance Type | r5b.8xlarge | E32bds_v5 |
vCPU | 32 | 32 |
RAM (GiB) | 256 | 256 |
Storage Configuration | 5x 2 TB gp3 (14,733 IOPS, 420 MB/s) data; 1x 1 TB gp3 (10,000 IOPS, 200 MB/s) log; 1x 0.5 TB gp3 (3,000 IOPS, 200 MB/s) root | 18x P30 1 TB (5,000 IOPS, 200 MB/s) data (no cache); 2x P30 1 TB (5,000 IOPS, 200 MB/s) log (no cache) |
Total IOPS | 86,667 | 100,000 |
Source: GigaOm 2022
*At the time of testing, AWS did not offer an Amazon Machine Image with both SQL Server 2019 CU15 and Windows Server 2022 in the EC2 Launch Wizard or AWS Launch Wizard for SQL Server. We used the latest offered, which was SQL Server 2019 CU12 on Windows Server 2019.
Other SQL Server settings include:
- Max degree of parallelism: 1
- Max server memory: 235,930 MB, which is 90% of total available system memory (256 GiB)
Benchmark Data
The data sets used in the benchmark were built from the well-recognized industry-standard TPC Benchmark™ H (TPC-H). Aspects of the workload were modified from the standard TPC-H benchmark for ease of benchmarking, and as such, the results generated are not comparable to official TPC results.
From tpc.org: “The TPC-H is a decision support benchmark. It consists of a suite of business-oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database were chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.”
For more information about the TPC-H, see their specification document.
The following table gives row counts of the database when loaded with 10 TB of TPC-H-like data to provide an idea of the data volumes used in our benchmark:
Table 3. TPC-H Database Row Count Given 10 TB
TPC-H Table | 10 TB Row Count |
---|---|
Customer | 150,000,000 |
Line Item | 6,000,000,000 |
Orders | 1,500,000,000 |
Part | 200,000,000 |
Supplier | 10,000,000 |
Part Supp | 800,000,000 |
Source: GigaOm 2022
Queries
We sought to replicate the TPC-H Benchmark queries modified only by syntax differences required by SQL Server. The benchmark is a fair representation of enterprise query needs. The TPC-H testing suite has 22 queries.
Test Execution
To execute the TPC-H Benchmark queries, we ran the test sequence of Power Run, Power Run, Throughput Run, using read-only queries throughout. A Power Run is a single user executing the 22 queries in a serial stream. A Throughput Run is seven concurrent users, each executing a stream of the 22 queries, for 154 query executions across seven parallel streams. We completed each test sequence three times and took the best result.
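The timing logic of a Throughput Run can be sketched as follows; execute_query() is a hypothetical stand-in for submitting a query to SQL Server, not part of our harness:

```python
# Seven streams run the 22 queries concurrently; the slowest stream
# defines the run's execution time.
import time
from concurrent.futures import ThreadPoolExecutor

def run_stream(queries: list[str]) -> float:
    start = time.perf_counter()
    for q in queries:
        execute_query(q)              # placeholder for real query submission
    return time.perf_counter() - start

def throughput_run(queries: list[str], streams: int = 7) -> float:
    with ThreadPoolExecutor(max_workers=streams) as pool:
        durations = list(pool.map(run_stream, [queries] * streams))
    return max(durations)             # longest of the concurrent streams
```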
Test Metric
We used the Throughput Run to calculate the performance metric of queries per hour (QPH). We used the longest running of the seven concurrent threads as the total execution time of the test. To calculate the QPH, we used the following formula:
QPH = (22 queries ÷ Throughput Run execution time in seconds) × 3,600 seconds per hour
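The same formula, expressed directly in code with a worked example:

```python
# The QPH formula above as a function.
def queries_per_hour(throughput_run_seconds: float, queries: int = 22) -> float:
    return queries / throughput_run_seconds * 3_600

print(queries_per_hour(600))   # a 600-second run works out to 132.0 QPH
```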
4. Field Test Results
Transactional Field Test Results
This section analyzes the transactions per second (tps) from the fastest of the three runs of the GigaOm Transactional Field Test on each platform, as described above. A higher tps is better, meaning that more transactions are processed every second.
Figure 1 shows that the best tps for Azure SQL Server 2019 Enterprise on Windows Server was 42% higher than for AWS SQL Server 2019 Enterprise on Windows Server.
Figure 1. Transactions per Second: Azure Premium SSD Managed Disks vs. AWS EBS General Purpose SSD (gp3). Higher is better.
Analytic Field Test Results
Figure 2 reveals that the best queries per hour (QPH) for Azure SQL Server 2019 Enterprise on Windows Server was 41% higher than for AWS SQL Server 2019 Enterprise on Windows Server.
Figure 2. Queries per Hour: Azure Premium SSD Managed Disks vs. AWS EBS General Purpose SSD (gp3). Higher is better.
5. Price Per Performance
The price-performance metric is price/throughput (tps). This is defined as the cost of running each cloud platform continuously for three years divided by the transactions per second throughput uncovered in the previous tests. The calculation is as follows:
Price Per Performance = $/tps =
[(Compute with on-demand SQL Server Hourly Rate × 24 hours/day × 365 days/year × 3 years) + (Data disk(s) monthly cost per disk × # of disks × 12 months × 3 years) + (Log disk monthly cost per disk × # of disks × 12 months × 3 years)] ÷ tps
When evaluating price per performance, the lower the number, the better. This means you get more compute power, storage I/O, and capacity for your budget.
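The same calculation as a function, with clearly placeholder rates (the actual rates used are listed in the Appendix):

```python
# The three-year price-performance calculation above as a function.
def price_per_tps(compute_hourly: float,
                  data_disk_monthly: float, data_disks: int,
                  log_disk_monthly: float, log_disks: int,
                  tps: float, years: int = 3) -> float:
    compute = compute_hourly * 24 * 365 * years
    data = data_disk_monthly * data_disks * 12 * years
    logs = log_disk_monthly * log_disks * 12 * years
    return (compute + data + logs) / tps

# Placeholder rates, not the tested prices: $10/hr compute,
# $135/month per disk, 3,000 tps.
print(price_per_tps(10.0, 135.0, 16, 135.0, 2, 3_000.0))
```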
Pricing Used:
We performed this calculation across two different pricing structures:
- Azure pay-as-you-go versus AWS on-demand
- Azure three-year reserved versus AWS standard three-year term reserved
The prices were at the time of testing and reflect the US West 2 region on AWS and North Central US region on Azure. The compute prices include both the actual AWS EC2/Azure VM hardware itself and the license costs of the operating system and Microsoft SQL Server Enterprise Edition. We also included Azure Hybrid Benefit versus AWS License Mobility rates for existing SQL Server license holders. Rate details are in the Appendix.
Please note prices do not include support costs for either Azure or AWS. Each platform has different pricing options. Buyers should evaluate all of their pricing choices, not just those presented in this paper.
Transactional Field Test Price-Performance
Figure 3 shows that Azure price-performance is 23% less expensive than the price-performance of AWS SQL Server on Windows for pay-as-you-go/on-demand price-performance. Note that in this chart a lower price-performance is better—meaning that it costs less to complete the same workload.
Figure 3. Price-Performance, Transactions per Second: Azure vs. AWS on Windows 2022, Pay-As-You-Go Without License Mobility. Lower is better.
Figure 4 reveals that the price-performance of Azure SQL Server on Windows 2022 with pay-as-you-go pricing and license mobility is 27% less expensive than AWS.
Figure 4. Price-Performance, Transactions per Second: Azure vs. AWS on Windows 2022, Pay-As-You-Go With License Mobility. Lower is better.
We also tested SQL Server 2019 Enterprise on Windows 2022 with a three-year commitment and SQL Server License Mobility. As shown in Figure 5, Azure price-performance proved to be 31% less expensive than the price-performance of AWS SQL Server on Windows with license mobility and a three-year commitment.
Figure 5. Price-Performance, Transactions per Second: Azure vs. AWS on Windows 2022, With License Mobility and a Three-Year Commitment. Lower is better.
Analytic Field Test Price-Performance
Starting with Figure 6, the focus shifts to the price-performance of Azure and AWS SQL Server deployments based on queries per hour. In this first test, the price-performance of Azure SQL Server on Windows 2022 with pay-as-you-go pricing and without license mobility proved to be 21% less expensive than AWS. Azure’s lower price-performance value shows that it costs less to complete the same workload on Azure than on AWS.
Figure 6. Price-Performance, Queries per Hour: Azure vs. AWS on Windows 2022, Pay-As-You-Go Without License Mobility. Lower is better.
Next, Figure 7 compares the price-performance of Azure and AWS SQL Server deployments, based on Windows 2022 and with license mobility and pay-as-you-go/on-demand pricing. Here, Azure price-performance proved 23% less expensive than AWS.
Figure 7. Price-Performance, Queries per Hour: Azure vs. AWS on Windows 2022 With License Mobility and Pay-As-You-Go/On-Demand Pricing. Lower is better.
Finally, Figure 8 explores the price-performance of Azure and AWS SQL Server deployments on Windows 2022 with license mobility and a three-year license. The testing reveals that Azure price-performance is 25% less expensive than AWS.
Figure 8. Price-Performance, Queries per Hour: Azure vs. AWS on Windows 2022 With License Mobility, and a Three-Year Commitment. Lower is better.
6. Conclusion
This report outlines the results from a GigaOm Transactional Field Test and a GigaOm Analytical Field Test to compare the same SQL Server infrastructure-as-a-service (IaaS) offering of two cloud vendors: Microsoft SQL Server on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instances and Microsoft SQL Server on Microsoft Azure Virtual Machines (VMs).
We have learned that the database, cloud, and storage all matter to latency, which can kill performance in critical transactional applications. Microsoft Azure presents a powerful cloud infrastructure offering for modern transactional and analytical workloads.
When it comes to transaction processing, Azure SQL Server on Windows price-performance without license mobility is 23% less expensive than the price-performance of AWS. With license mobility, that price-performance advantage widens to 27%, while adding a three-year commitment further widens the advantage for Azure to 31%.
For analytic processing, Azure SQL Server on Windows without license mobility delivered price-performance that was 21% less expensive than AWS. Factor in license mobility, and Azure price-performance was 23% less expensive than AWS, while the addition of three-year pricing stretched the Azure advantage over AWS to 25%.
Keep in mind that the tests are configured to get the best from each platform according to publicly documented best practices. Further optimizations may become possible on both platforms as their offerings evolve or as internal tests point to different configurations.
7. Disclaimer
Performance is important, but it is only one criterion for selecting a business-critical database platform. This test is a point-in-time check of specific performance. There are numerous other factors to consider, including administration, integration, workload management, user interface, scalability, vendor, and reliability, among many other criteria. It is also our experience that performance changes over time and differs competitively across workloads. Also, a performance leader can run up against the point of diminishing returns, and viable contenders can quickly close the gap.
The benchmark setup was informed by the TPC Benchmark™ E (TPC-E) and the TPC Benchmark™ H (TPC-H) specification. The workloads were derived from TPC-E and TPC-H and are not official TPC benchmarks nor may the results be compared to official TPC-E or TPC-H publications.
GigaOm runs all of its performance tests to strict ethical standards. The results of the report are the objective results of the application of queries to the simulations described in the report. The report clearly defines the selected criteria and process used to establish the field test. The report also clearly states the data set sizes, the platforms, the queries, etc. used. The reader is left to determine for themselves how to qualify the information for their individual needs. The report does not make any claim regarding the third-party certification and presents the objective results received from the application of the process to the criteria as described in the report. The report strictly measures performance and does not purport to evaluate other factors that potential customers may find relevant when making a purchase decision.
This is a sponsored report. Microsoft chose the competitors, the test, and the Microsoft configuration. GigaOm chose the most compatible configurations for the other tested platform and ran the testing workloads. Choosing compatible configurations is subject to judgment. We have attempted to describe our decisions in this paper.
8. About Microsoft
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
Microsoft offers SQL Server on Azure. To learn more about Azure SQL Database visit https://azure.microsoft.com/en-us/services/sql-database/.
9. About William McKnight
William McKnight is a former Fortune 50 technology executive and database engineer. An Ernst & Young Entrepreneur of the Year finalist and frequent best practices judge, he helps enterprise clients with action plans, architectures, strategies, and technology tools to manage information.
Currently, William is an analyst for GigaOm Research who takes corporate information and turns it into a bottom-line-enhancing asset. He has worked with Dong Energy, France Telecom, Pfizer, Samba Bank, ScotiaBank, Teva Pharmaceuticals, and Verizon, among many others. William focuses on delivering business value and solving business problems utilizing proven approaches in information management.
10. About Jake Dolezal
Jake Dolezal is a contributing analyst at GigaOm. He has two decades of experience in the information management field, with expertise in analytics, data warehousing, master data management, data governance, business intelligence, statistics, data modeling and integration, and visualization. Jake has solved technical problems across a broad range of industries, including healthcare, education, government, manufacturing, engineering, hospitality, and restaurants. He has a doctorate in information management from Syracuse University.
11. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
12. Copyright
© Knowingly, Inc. 2022 "SQL Transaction Processing and Analytic Performance Price-Performance Testing" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.