
Bringing in-memory transaction processing to the masses: an analysis of Microsoft SQL Server 2014 in-memory OLTP

Table of Contents

  1. Summary
  2. Introduction
  3. Primary approaches to in-memory databases
  4. Historical context and trends
  5. An in-memory DBMS is much more than a DBMS in memory
  6. Summary of design principles for Microsoft’s in-memory OLTP
  7. Competitive approaches
  8. SQL Server 2014 in-memory design principles and business benefits
  9. Key takeaways
  10. Appendix A: Why an in-memory database is much more than a database in memory
  11. Appendix B: Case studies
  12. About George Gilbert

1. Summary

The emerging class of enterprise applications that combine systems of record and systems of engagement has geometrically growing performance requirements: these applications must capture more data per business transaction from ever-larger online user populations. They offer many capabilities similar to consumer online services such as Facebook or LinkedIn, but they need to leverage the decades of enterprise investment in SQL-based technologies. Just as these new customer requirements have emerged, SQL database management system (DBMS) technology is going through its biggest change in decades. For the first time, there is enough inexpensive memory capacity on mainstream servers for SQL DBMSs to be optimized around the speed of in-memory data rather than the performance constraints of disk-based data. This new emphasis enables a new DBMS architecture.

This research report addresses two audiences.

  • The first is the IT business decision-maker who has a moderate familiarity with SQL DBMSs. For them, this report explains how in-memory technology can leverage SQL database investments to deliver dramatic performance gains.
  • The second is the IT architect who understands the performance breakthroughs possible with in-memory technology. For them, this report explains the trade-offs that determine the different sweet spots of the various vendor approaches.

There are three key takeaways.

  • First, there is an emerging need for a data platform that supports a variety of workloads, such as online transaction processing (OLTP) and analytics at different performance and capacity points, so that traditional enterprises don’t need an internal software development department to build, test, and operate a multi-vendor solution.
  • Second, within its data platform, Microsoft’s SQL Server 2014 In-Memory OLTP not only leverages in-memory technology but also takes advantage of the ability to scale up to 64 virtual processor cores, delivering a 10- to 30-times gain in throughput without the complexity of partitioning data across a cluster of servers (a brief T-SQL sketch follows this list).
  • Third, Oracle and IBM can scale to very high OLTP performance and capacity points, but they require a second, complementary DBMS to deliver in-memory technology. SAP’s HANA is attempting to deliver a single DBMS that supports the full range of analytic and OLTP workloads, with the industry closely watching how well it optimizes performance. NewSQL vendors VoltDB and MemSQL are ideal for greenfield online applications that demand elastic scalability and automatic partitioning of data.
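
To make the second takeaway concrete, the sketch below illustrates the basic In-Memory OLTP programming model in SQL Server 2014: a memory-optimized table paired with a natively compiled stored procedure. The table, columns, bucket count, and procedure name are illustrative assumptions rather than examples taken from the report, and the sketch assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup.

    -- Illustrative (assumed) schema: a memory-optimized table for shopping-cart rows.
    -- Assumes the database already contains a MEMORY_OPTIMIZED_DATA filegroup.
    CREATE TABLE dbo.ShoppingCart
    (
        CartId      INT       NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId      INT       NOT NULL,
        CreatedUtc  DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO

    -- Natively compiled stored procedure: the T-SQL body is compiled to machine
    -- code when the procedure is created, avoiding per-statement interpretation.
    CREATE PROCEDURE dbo.InsertCart @CartId INT, @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedUtc)
        VALUES (@CartId, @UserId, SYSUTCDATETIME());
    END;
    GO

Because memory-optimized tables and natively compiled procedures live in the same database alongside conventional disk-based tables and interpreted T-SQL, existing applications can adopt them selectively for hot tables and code paths, which is how the technology leverages prior SQL Server investments rather than requiring a separate product.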