
Why you should think about upgrading to SQL Server 2016!


SQL Server 2005 is out of support, and you might be (or rather, you should be) planning an enterprise-wide upgrade of SQL Server. Are you thinking of upgrading to SQL Server 2014? If so, then wait! SQL Server 2016 is packed with great new features and, just as importantly, plenty of enhancements to existing ones. Here is a quick walk-through of my 6 favorite features, and why SQL Server 2016 is a great choice for your next platform upgrade once it is released (RTM):

1. In Memory OLTP Improvements

As you may already know, In-Memory OLTP, also known as ‘Hekaton’ and ‘In-Memory Optimization’, is Microsoft’s latest in-memory processing technology. In-Memory OLTP is optimized for Online Transaction Processing (OLTP). It is integrated into SQL Server’s Database Engine and can be used in exactly the same manner as any other Database Engine component.

In-Memory OLTP originally shipped with SQL Server 2014, and it mainly features two new data structures: Memory-Optimized Tables and Natively Compiled Stored Procedures.

SQL Server 2016 provides an improved In-Memory OLTP engine and removes many of the restrictions that existed in SQL Server 2014 (phew! I was waiting for this to happen).

In-Memory OLTP Improvements in SQL Server 2016

Feature/Limit | SQL Server 2014 | SQL Server 2016
Maximum size of durable table | 256 GB | 2 TB
LOB (varbinary(max), [n]varchar(max)) | Not supported | Supported
Transparent Data Encryption (TDE) | Not supported | Supported
ALTER PROCEDURE / sp_recompile | Not supported | Supported (fully online)
ALTER TABLE | Not supported (DROP / re-CREATE) | Partially supported*
DML triggers | Not supported | Partially supported (AFTER, natively compiled)

Feature/Limit | SQL Server 2014 | SQL Server 2016
Indexes on NULLable columns | Not supported | Supported
Foreign Keys | Not supported | Supported
Check/Unique Constraints | Not supported | Supported
Parallelism | Not supported | Supported
OUTER JOIN, OR, NOT, UNION [ALL], DISTINCT, EXISTS, IN | Not supported | Supported
SSMS Table Designer | Not supported | Supported

The above two tables should help you understand why SQL Server 2016 should be your platform of choice if you are planning to deploy In-Memory OLTP for your critical workloads. I would still say In-Memory OLTP should be your last resort for fixing issues like latch contention or heavy blocking, because there are other techniques, such as hash partitioning with a computed column (to reduce contention) or Read Committed Snapshot Isolation (RCSI) (to avoid blocking). Those techniques come with their own overheads, so you should evaluate all the available options and pick the right solution.
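To give a feel for what the lifted restrictions look like in practice, here is a minimal sketch of a memory-optimized table with an NVARCHAR(MAX) (LOB) column, something SQL Server 2014 would reject, plus a natively compiled stored procedure that inserts into it. The table, column, and procedure names are illustrative, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup.

    -- Memory-optimized table with a LOB column (allowed starting with SQL Server 2016)
    CREATE TABLE dbo.SessionState
    (
        SessionId INT           NOT NULL PRIMARY KEY NONCLUSTERED,
        UserName  NVARCHAR(100) NOT NULL,
        Payload   NVARCHAR(MAX) NULL     -- LOB columns were not supported in SQL Server 2014
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO

    -- Natively compiled stored procedure writing to the memory-optimized table
    CREATE PROCEDURE dbo.usp_AddSessionState
        @SessionId INT, @UserName NVARCHAR(100), @Payload NVARCHAR(MAX)
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO dbo.SessionState (SessionId, UserName, Payload)
        VALUES (@SessionId, @UserName, @Payload);
    END;
    GO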

2. Column Store Indexes

SQL Server 2016 introduces real-time operational analytics: the ability to run both analytics and OLTP workloads against the same database tables at the same time (yes, to an extent; we are not talking about replacing SSAS cubes).

Benefits of Column Store Indexes

A columnstore index can provide a very high level of data compression, typically around 10x, which significantly reduces your data warehouse storage cost. For analytics, columnstore indexes also offer an order of magnitude better performance than a B-tree index, which makes them the preferred data storage format for data warehousing and analytics workloads. Starting with SQL Server 2016, you can use columnstore indexes for real-time analytics on your operational workload.

Recommended use cases:

  • Use a clustered columnstore index to store fact tables and large dimension tables for data warehousing workloads. This improves query performance and data compression by up to 10x.
  • Use a nonclustered columnstore index to perform real-time analysis on an OLTP workload.

You might have many analytics queries today that you would never run directly against an OLTP database because of the impact on OLTP workload performance. With operational analytics, you have an opportunity to test whether those analytics queries can indeed be run against the OLTP database, or even to leverage AlwaysOn AG secondary replicas powered by nonclustered columnstore indexes to offload your analytics workloads.
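Here is a minimal sketch of what this looks like in T-SQL: a nonclustered columnstore index added on top of an existing rowstore OLTP table, and an aggregate query the optimizer can satisfy from the columnstore while inserts and updates continue against the rowstore. The OrderDetail table and its columns are illustrative.

    -- Add an updatable nonclustered columnstore index to an existing OLTP table
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_OrderDetail
    ON dbo.OrderDetail (OrderDate, ProductId, Quantity, UnitPrice);
    GO

    -- Analytics query served from the columnstore, alongside the OLTP workload
    SELECT ProductId,
           SUM(Quantity * UnitPrice) AS Revenue
    FROM   dbo.OrderDetail
    WHERE  OrderDate >= DATEADD(MONTH, -1, SYSDATETIME())
    GROUP BY ProductId;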

3. Enhanced AlwaysOn Availability Groups

The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports a set of read-write primary databases and one to eight sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only access and/or some backup operations.

AlwaysOn AG Improvements in SQL Server 2016

Failover on database health
Distributed Transaction Coordinator support
3 Synchronous replicas
Optimized log transport
Load balancing across readable secondary replicas

The first enhancement is self-explanatory: a failover can now be triggered based on the health of the database itself, which can come in handy in many situations.

DTC support is promising. In the past I have worked with some 3rd-party apps that relied heavily on DTC, and it’s good to see that SQL Server 2016 supports it.

Optimized log transport is my favorite enhancement. On a highly concurrent OLTP system, there is always a chance for the secondaries to fall behind and put the RTO/RPO service level agreements at risk. With the optimized log transport and parallel redo threads, this overhead is reduced.
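As a rough sketch of how some of these options surface in T-SQL, here is an availability group created with DB_FAILOVER and DTC_SUPPORT, followed by a load-balanced read-only routing list (the nested parentheses define a round-robin set, new in SQL Server 2016). Server, endpoint, and database names are illustrative, and each secondary would also need a READ_ONLY_ROUTING_URL configured for routing to actually work.

    -- Create the availability group with the new SQL Server 2016 options
    CREATE AVAILABILITY GROUP AG_Sales
    WITH (DB_FAILOVER = ON,        -- fail over on database health
          DTC_SUPPORT = PER_DB)    -- DTC support for distributed transactions
    FOR DATABASE SalesDB
    REPLICA ON
        N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://SQLNODE1.contoso.com:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC),
        N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://SQLNODE2.contoso.com:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC),
        N'SQLNODE3' WITH (ENDPOINT_URL = N'TCP://SQLNODE3.contoso.com:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL,
                          SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
    GO

    -- Load balancing across readable secondaries: SQLNODE2 and SQLNODE3 share
    -- read-only traffic round-robin before falling back to SQLNODE1
    ALTER AVAILABILITY GROUP AG_Sales
    MODIFY REPLICA ON N'SQLNODE1'
    WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ((N'SQLNODE2', N'SQLNODE3'), N'SQLNODE1')));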

4. Native JSON Support

SQL Server 2016 natively supports JSON. With native support, you have the ability to format and export data as JSON strings, load JSON text into tables, extract values from JSON text, index properties in JSON text stored in columns, and so on.

The first thing to be aware of is that built-in JSON support is not the same as a native JSON type. In SQL Server 2016, JSON is stored and represented as NVARCHAR.

JSON Use cases

  • You can accept JSON, parse it easily, and store it as relational data
  • You can easily export relational data as JSON
  • You can correlate relational and non-relational data
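Here is a minimal sketch of these use cases, using an illustrative Person table and some hard-coded JSON text:

    -- Export relational data as JSON text
    SELECT PersonId, FirstName, LastName
    FROM   dbo.Person
    FOR JSON PATH, ROOT('people');

    -- Parse JSON text back into rows and columns
    DECLARE @json NVARCHAR(MAX) =
        N'[{"PersonId":1,"FirstName":"Jane","LastName":"Doe"},
           {"PersonId":2,"FirstName":"John","LastName":"Smith"}]';

    SELECT *
    FROM OPENJSON(@json)
    WITH (PersonId  INT          '$.PersonId',
          FirstName NVARCHAR(50) '$.FirstName',
          LastName  NVARCHAR(50) '$.LastName');

    -- Extract a single value from JSON text (the same works on an NVARCHAR column)
    SELECT JSON_VALUE(@json, '$[0].LastName') AS FirstPersonLastName;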

5. Always Encrypted

Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (e.g. U.S. social security numbers), stored in SQL Server databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access).

Always Encrypted makes encryption transparent to applications. An Always Encrypted-enabled driver installed on the client computer achieves this by automatically encrypting and decrypting sensitive data in the client application. The driver encrypts the data in sensitive columns before passing the data to the Database Engine, and automatically rewrites queries so that the semantics to the application are preserved. Similarly, the driver transparently decrypts data, stored in encrypted database columns, contained in query results.

Data is vulnerable both at rest and in motion. Features like TDE (Transparent Data Encryption) protect data at rest, but until SQL Server 2016 there was no engine feature that kept the data encrypted while in motion. Attacks like a man-in-the-middle attack can compromise data in motion; with SQL Server 2016 and Always Encrypted, we now have a solution to this problem.
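Here is a minimal sketch of an Always Encrypted column definition. It assumes a column master key and a column encryption key (CEK_Auto1 here) have already been provisioned, for example through the SSMS wizard; the key, table, and column names are illustrative.

    CREATE TABLE dbo.Customer
    (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        -- Deterministic encryption (allows equality lookups) requires a BIN2 collation
        SSN CHAR(11) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                            ENCRYPTION_TYPE = DETERMINISTIC,
                            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
        -- Randomized encryption offers stronger protection but no lookups
        CreditCardNumber NVARCHAR(25)
            ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                            ENCRYPTION_TYPE = RANDOMIZED,
                            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NULL
    );
    -- The client connects with "Column Encryption Setting=Enabled" in the connection
    -- string; the Always Encrypted-enabled driver encrypts parameters and decrypts results.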

6. Temporal tables

Temporal tables, also called system-versioned tables, allow SQL Server to automatically keep a history of the data in a table. Temporal tables were introduced in the ANSI SQL 2011 standard and are now available in SQL Server 2016.

A system-versioned table allows you to query updated and deleted data, while a normal table can only return the current data. For example, if you update a column value from 5 to 10, you can only retrieve the value 10 from a normal table; a temporal table also allows you to retrieve the old value 5. This is accomplished by keeping a history table, which stores the old data together with a start and end date that indicate when each record was active.

A classic use case for temporal tables is auditing. With temporal tables, you can find out what values a specific entity has had over its entire lifetime.

Another use case is quick recovery when you or someone else deletes rows from a table (yes, the same old DELETE without a WHERE clause!). With temporal tables, you can retrieve the deleted rows from the history table and insert them back into the main table, as sketched below.
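Here is a minimal sketch of both scenarios: a system-versioned table, a time-travel query, and a recovery of a deleted row from the history table. Table and column names, key values, and the point-in-time value are illustrative.

    CREATE TABLE dbo.Employee
    (
        EmployeeId INT           NOT NULL PRIMARY KEY,
        Salary     DECIMAL(10,2) NOT NULL,
        ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
    GO

    -- Time travel: what did this row look like at a given point in time?
    SELECT EmployeeId, Salary, ValidFrom, ValidTo
    FROM   dbo.Employee
    FOR SYSTEM_TIME AS OF '2016-05-01T09:00:00'
    WHERE  EmployeeId = 42;

    -- Recover an accidentally deleted row using its latest version in the history table
    INSERT INTO dbo.Employee (EmployeeId, Salary)
    SELECT EmployeeId, Salary
    FROM   dbo.EmployeeHistory
    WHERE  EmployeeId = 42
      AND  ValidTo = (SELECT MAX(ValidTo) FROM dbo.EmployeeHistory WHERE EmployeeId = 42);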

So, you might be curious: what is the difference between temporal tables and CDC (Change Data Capture)? Here is the explanation:

CDC is usually used to keep changes for a short period of time so they can be fed to external consumers, such as an ETL process. Temporal tables can keep historic data for very long periods and are usually used for auditing/legal purposes and time-travel queries.

Conclusion:

These 6 features should convince you to consider upgrading to SQL Server 2016 when it’s released. There are many other exciting features, like PolyBase and a completely overhauled SSRS, and we will talk more about those in upcoming posts.

Thanks for reading and keep watching this space for more!
