Even More SQL Server Features that Time Forgot

SQL Server works well, and Microsoft does everything it can to keep it relevant and competitive. As with everything in real life, it doesn't always get it completely right, and Rob Sheldon continues his quest through the jungle of past features to rediscover and explore the ones that time forgot. Here, he comes across Lightweight Pooling, XML Indexes, Stretch Databases, SQL Variants, Transaction Savepoints and In-Memory OLTP.

The series so far:

  1. The SQL Server Features that Time Forgot: Data Quality Services, Master Data Services, Policy-Based Management, Management Data Warehouse and Service Broker
  2. More SQL Server Features that Time Forgot: Auto-shrink, Buffer-pool extension, Database Diagrams, Database Engine Tuning Advisor, and SQL CLR
  3. Even More SQL Server Features that Time Forgot: In-Memory OLTP, lightweight pooling, the sql_variant data type, stretch databases, transaction savepoints, and XML indexes
  4. Still More SQL Server Features that Time Forgot: Active Directory Helper Service, Data Transformation Services, DBCC commands, English Query, Native XML Web Services, Northwind and pubs databases, Notification Services, SQL Mail, SQL Server Distributed Management Objects, Surface Area Configuration Tool, utilities, and Web Assistant

Few SQL Server features escape the scrutiny of those developers and administrators immersed in the product every day. Whenever a new feature is introduced, they usually have strong opinions about its capabilities and worth, and are often happy to share them. That’s not to say those opinions are always negative. In fact, they usually run the full gamut, with some users expressing both satisfaction and displeasure at the same time. Whether Microsoft is listening is an entirely different matter.

In the first two articles in this series, we covered a number of SQL Server features that, for one reason or another, have tended to generate strong reactions, ranging from elation to abject frustration. In this article, we continue that discussion, touching upon more features that have stirred up controversy in one form or another. You might have had your own experiences with these features—and have likely formulated your own opinions—in which case this article might offer a few additional perspectives to consider, or at least provide some distraction.

In-memory OLTP

Microsoft introduced in-memory OLTP (online transaction processing) in SQL Server 2014. Since then, the company has been evangelizing its in-memory capabilities as the new age of transaction processing, with promises of performance like the world has never seen.

In-memory OLTP is integrated into the database engine and provides in-memory processing capabilities for memory-optimized tables, natively compiled stored procedures, and natively compiled user-defined scalar functions. (Microsoft added support for scalar functions in SQL Server 2016.)

When you implement a memory-optimized table, the entire table resides in memory. Applications accessing the data read from and write to memory, interacting with the table just like a disk-based table. If you enable durability on a memory-optimized table, the database engine maintains a copy of the table on disk.
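
As a rough illustration, the following statement creates a durable memory-optimized table on SQL Server 2016 or later. The schema, table, and column names are hypothetical, and the database is assumed to already contain a memory-optimized filegroup:

CREATE TABLE dbo.SessionCache
(
  SessionID  INT          NOT NULL
    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
  UserName   NVARCHAR(50) NOT NULL,
  LastAccess DATETIME2    NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- SCHEMA_AND_DATA keeps a durable copy of the rows on disk;
-- SCHEMA_ONLY would persist only the table definition across restarts.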

Natively compiled stored procedures and user-defined functions can be used in conjunction with memory-optimized tables to improve performance even further. The modules are compiled to native code upon creation, rather than when first executed, as is the case with traditional stored procedures and user-defined functions. Natively compiled modules can reference only memory-optimized objects.
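
To give a sense of the syntax, here is a minimal sketch of a natively compiled procedure that updates the hypothetical table above. Note the required SCHEMABINDING option and the ATOMIC block:

CREATE PROCEDURE dbo.TouchSession
  @SessionID INT,
  @LastAccess DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
  -- Compiled to native code when the procedure is created, not on first execution
  UPDATE dbo.SessionCache
  SET LastAccess = @LastAccess
  WHERE SessionID = @SessionID;
END;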

When the in-memory OLTP features were first introduced, they came with so many limitations that many database developers lost interest immediately. They could not use outer joins or operators such as UNION or DISTINCT. They could not enable Transparent Data Encryption (TDE) on the database files. They could not alter the tables after creating them. In fact, they could not do a great many things, way too many to go into here.

SQL Server 2016 addressed many of these limitations, but the in-memory OLTP features still came with their fair share of restrictions. For example, you cannot target a memory-optimized table with a MERGE INTO statement, create a snapshot of a database that includes a memory-optimized filegroup, or create a contained database that includes in-memory OLTP objects.

Despite these limitations, some database teams have successfully implemented in-memory OLTP and are happy with the results, often using its capabilities to improve temporary table performance or deal with tempdb contention issues. Regardless of how you implement in-memory OLTP, you’ll likely need to use natively compiled modules in conjunction with the memory-optimized tables to get the best performance, which means you might have to update a lot of code.
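
One common pattern for the tempdb scenario is a memory-optimized table type used in place of a regular table variable or temp table. A minimal sketch (the type and column names are hypothetical) might look like this:

CREATE TYPE dbo.OrderIdList AS TABLE
(
  OrderID INT NOT NULL PRIMARY KEY NONCLUSTERED  -- memory-optimized table types require at least one index
)
WITH (MEMORY_OPTIMIZED = ON);
GO  -- batch separator: the type must exist before it can be used below

DECLARE @ids dbo.OrderIdList;
INSERT INTO @ids (OrderID) VALUES (1), (2), (3);  -- rows live in memory rather than in tempdb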

For many database developers and administrators, the in-memory technology is still too new for them to feel confident implementing it into production, preferring to take a wait-and-see approach. Some also believe that a properly optimized system eliminates the need for in-memory OLTP. That said, Microsoft seems committed to its in-memory offerings and is likely to continue to improve them with each SQL Server release, as we’re seeing with SQL Server 2017.

Lightweight pooling

Introduced in SQL Server 2000, the lightweight pooling option lets you configure a server instance to run in fiber mode, rather than the default thread mode. Fiber mode reduces excess context-switching in symmetric multiprocessing (SMP) environments, which can help address performance issues related to the User Mode Scheduler (UMS). When you enable lightweight pooling, the database engine uses Windows fibers rather than threads to process the UMS workers.
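
For reference, the option is exposed through sp_configure as an advanced option. A sketch of enabling it looks like this (the change also requires restarting the SQL Server service, and the default of 0 is almost always the right setting):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'lightweight pooling', 1;  -- 0 (thread mode) is the default
RECONFIGURE;
-- The new value does not take effect until the instance is restarted.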

Lightweight pooling might sound like a credible solution for addressing performance issues, especially with a name that includes lightweight, but fiber mode comes with a number of concerns. One of the biggest is that it can actually degrade performance because it eliminates many of the benefits that come with context switching. Even Microsoft admits that “fiber mode seldom improves performance or scalability on the typical system.”

Worse still, fiber mode can break some SQL Server components, especially thread-related features such as Thread Local Storage (TLS) or thread-owned objects (i.e., mutexes). In addition, common language runtime (CLR) execution is not supported when fiber mode is enabled. For a good rundown on many of the issues that come with lightweight pooling, check out Ken Henderson’s article The Perils of Fiber Mode. It might have been written back in 2005, but it’s still full of valuable information.

It’s also worth noting that Windows Server 2003 introduced thread pool enhancements that greatly improved thread scheduling and context switching, making lightweight pooling even less relevant. Chances are, you’ll never need to implement fiber mode, but if you’re considering it, make certain you exhaust all other optimization routes first. If you don’t believe me, take your cue from Microsoft: “We recommend that you do not use fiber mode scheduling for routine operations…” It doesn’t get much clearer than that.

sql_variant data type

SQL Server offers a catch-all data type called sql_variant, which you can assign to columns, variables, parameters, or values returned by user-defined functions. A sql_variant value can hold up to 8,000 bytes and include almost any type of data. For example, a sql_variant column might contain integers, decimals, strings, dates, or other types of data.

In specific situations, the sql_variant data type can be a useful tool when working with inconsistent or unspecified data types. For instance, the data type might be a good fit for a user-defined function that can return different types of data. I’ve seen other suggestions as well, such as using the data type for columns that support user-defined fields or within the ELSE clause of a CASE expression to handle unexpected values.
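
As a quick illustration (the table and column names are hypothetical), a sql_variant column can accept values of different base types, and the SQL_VARIANT_PROPERTY function reveals what each value actually holds:

CREATE TABLE dbo.UserDefinedField
(
  FieldID    INT          NOT NULL PRIMARY KEY,
  FieldName  NVARCHAR(50) NOT NULL,
  FieldValue sql_variant  NULL
);

INSERT INTO dbo.UserDefinedField (FieldID, FieldName, FieldValue)
VALUES (1, N'Weight', 12.5),
       (2, N'Color', N'Blue'),
       (3, N'InStock', CAST(1 AS BIT));

SELECT FieldName,
       SQL_VARIANT_PROPERTY(FieldValue, 'BaseType') AS BaseType
FROM dbo.UserDefinedField;
-- Returns numeric, nvarchar, and bit for the three rows.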

That said, the sql_variant data type should be assigned with great caution. A relational database management system such as SQL Server is most effective when data architects and developers know exactly the type of data they’re working with—and they build that knowledge into their designs. Typing the data is one of the first and most important steps for ensuring the data’s integrity. If you find yourself resorting to the sql_variant data type with any regularity, it’s time to step back and examine your application and database designs.

That’s not to say you should never use the sql_variant data type. Just be sure you do so with great care. It is far better to think of it as a last resort than your first line of defense. Be aware, however, that the data type comes with a number of limitations, and you have to be careful when comparing sql_variant values. A good place to start in better understanding the sql_variant data type is with the SQL Server help topic Using sql_variant Data.
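
The comparison behavior in particular can surprise you. When two sql_variant values belong to different data type families, SQL Server compares the families rather than the values, so an integer outranks a character string regardless of magnitude. A small, contrived example of the effect:

SELECT CASE
         WHEN CAST(200 AS sql_variant) > CAST('1000' AS sql_variant)
         THEN 'int value wins'     -- this branch is returned: exact numerics outrank character strings
         ELSE 'string value wins'
       END AS ComparisonResult;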

Stretch databases

Microsoft introduced the stretch database in SQL Server 2016 to allow database teams to extend tables in their on-premises databases to the Azure platform, where storage space is considered to be relatively cheap. The feature automates the process of archiving historical data to the cloud, where it can still be accessed, but not updated or deleted. This can be useful for storing large sets of historical data off-site while minimizing in-house resource needs and still meeting compliance and regulatory requirements.
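
Very roughly, enabling the feature involves turning on the instance-level option, linking the database to Azure, and then stretching individual tables. The sketch below uses placeholder server, credential, database, and table names, and assumes a database master key and a database-scoped credential for the Azure server already exist:

EXEC sp_configure 'remote data archive', 1;  -- instance-level prerequisite
RECONFIGURE;

ALTER DATABASE SalesHistory
  SET REMOTE_DATA_ARCHIVE = ON
    (SERVER = 'yourserver.database.windows.net',
     CREDENTIAL = StretchCredential);

ALTER TABLE dbo.OrderArchive
  SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));  -- start migrating eligible rows to Azure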

As good as all this might sound, the stretch database comes with many limitations, so many, in fact, that few database teams are likely to implement this feature in production any time soon. As already noted, you cannot update or delete the migrated data (which in itself is not a bad idea for archived data), but you also cannot update or delete any data that is eligible for migration, even though it’s still sitting in your on-premises database.

But even that would not be so bad if it were the only issue you had to contend with. Unfortunately, this is not the case. For example, you cannot create an index on a view that references a stretch-enabled table, insert rows into a stretch-enabled table on a linked server, or propagate the filters you create on your indexes to the Azure environment.

You’re also limited when it comes to the stretch-enabled table itself. The table cannot be a memory-optimized table, contain FILESTREAM data, or participate in replication, change tracking, or change data capture. Computed columns are also forbidden, as are default constraints and check constraints. You can also forget about full-text indexes, spatial indexes, and XML indexes. There are plenty of other constraints as well, but no sense going into every gory detail.

Despite these limitations, many in the industry like the idea of the stretch database and are hopeful that one day Microsoft will make it a more usable feature. In the meantime, database teams that can work within the current limitations might want to check out the Stretch Database Advisor included with the SQL Server 2016 Upgrade Advisor (which appears to have been renamed the Data Migration Assistant). The Stretch Database Advisor can help you identify databases and tables that are potential stretch candidates, as well as determine what might be blocking the ones that are not.

Transaction savepoints

Savepoints provide a way to roll back a portion of a transaction, without having to roll back the entire transaction. You create a savepoint at a specific place in your transaction and then use a ROLLBACK TRANSACTION statement to roll back the transaction to that point, calling the savepoint by name.
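
A minimal sketch of the pattern (the table names are hypothetical) looks like this:

BEGIN TRANSACTION;

INSERT INTO dbo.OrderAudit (OrderID, Note)
VALUES (42, N'Shipment requested');

SAVE TRANSACTION AfterAudit;  -- mark a point we can roll back to by name

UPDATE dbo.Orders
SET Status = N'Shipped'
WHERE OrderID = 42;

IF @@ROWCOUNT = 0
  ROLLBACK TRANSACTION AfterAudit;  -- undo only the work performed after the savepoint

COMMIT TRANSACTION;  -- the audit row is committed either way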

Microsoft warns that you should use savepoints only if errors are unlikely and if the savepoints are more efficient than other alternatives, such as rolling back the entire transaction or pre-verifying specific data. SQL Server documentation includes an example of creating a savepoint in its help topic Transaction Savepoints. The example is fairly weak, but at least it demonstrates how easy it is to implement savepoints.

It’s worth noting that SQL Server allows you to create savepoints with duplicate names within a transaction. This can cause unexpected results in a transaction, such as when you’re calling one stored procedure within another. If duplicate names exist, the ROLLBACK TRANSACTION statement will roll back only to the most recent instance of the name and disregard the rest.
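
You can see the behavior with a contrived example (again, the table is hypothetical): both savepoints below share a name, and the rollback returns only to the second one.

BEGIN TRANSACTION;
INSERT INTO dbo.StagingRows (Payload) VALUES (N'batch 1');
SAVE TRANSACTION Batch;
INSERT INTO dbo.StagingRows (Payload) VALUES (N'batch 2');
SAVE TRANSACTION Batch;           -- duplicate savepoint name
INSERT INTO dbo.StagingRows (Payload) VALUES (N'batch 3');
ROLLBACK TRANSACTION Batch;       -- undoes only 'batch 3'; the earlier Batch savepoint is disregarded
COMMIT TRANSACTION;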

You’ll find plenty of database developers out there who view savepoints as nothing more than transactional warts, their use a vain attempt at nested or autonomous transactions. You’ll also find savepoint proponents who enthusiastically extol their virtues, treating them as god-like tools for managing transactions. What we don’t know is how many database teams are actually using savepoints in their production environments. Whoever they are, they no doubt have an interesting story to tell about their use.

XML indexes

Microsoft introduced the xml data type in SQL Server 2005. Since then, the data type and its related features have received a lukewarm reception, with reactions across the board. Even so, for many database teams, the XML capabilities have served their purpose, and the teams have been relatively satisfied with the results. The same can’t be said for XML indexes. In fact, they draw such fierce criticism in some quarters that the only good thing you can say about them is that they rank up there with all those other SQL Server components that everyone loves to hate.

SQL Server supports two types of XML indexes: primary and secondary. The primary ones index every tag, value, and path within every XML document in a column configured with the xml data type. The secondary ones are specific to the paths, values, or properties, but still index every document. For each xml column, you can create one primary XML index and multiple secondary indexes. However, you must create the primary one before you can create the secondary ones.
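
To show the dependency between the two, here is a sketch that creates a primary XML index and then a secondary PATH index on a hypothetical table. The base table needs a clustered primary key before the primary XML index can be created:

CREATE TABLE dbo.ProductCatalog
(
  ProductID INT NOT NULL PRIMARY KEY CLUSTERED,
  Specs     XML NOT NULL
);

CREATE PRIMARY XML INDEX PXML_ProductCatalog_Specs
  ON dbo.ProductCatalog (Specs);

CREATE XML INDEX SXML_ProductCatalog_Specs_Path
  ON dbo.ProductCatalog (Specs)
  USING XML INDEX PXML_ProductCatalog_Specs FOR PATH;  -- secondary indexes also come in VALUE and PROPERTY flavors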

XML indexes can grow quite large and are costly to update. They also have a reputation for degrading query performance, a side-effect that most DBAs could do without. Many database teams, when faced with the performance uncertainties and excessive bloat, forego XML indexes altogether.

But it’s not all bad news. For certain types of queries against an xml column, indexes might actually improve performance, but you must be able to ensure that the bulk of your queries are of a specific type and are predictable enough to warrant the index, which usually entails a fair amount of trial-and-error.

To address some of the pain-points of XML indexes, Microsoft introduced selective XML indexes in SQL Server 2012. Selective indexes let you target a subset of nodes in your XML documents, which can lead to better performance and more efficient storage. As with the original XML indexes, you can create one primary selective index and multiple secondary ones. However, even with these, you want to be sure that your indexes are supporting very specific and predictable queries. Once again, there will likely be a lot of trial-and-error, but in some cases, it might be worth the effort.
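
A selective XML index promotes only the paths you name. A rough sketch against a hypothetical table might look like the following, where the paths and types are assumptions about the XML being stored:

CREATE TABLE dbo.ProductSpecs
(
  ProductID INT NOT NULL PRIMARY KEY CLUSTERED,
  Specs     XML NOT NULL
);

CREATE SELECTIVE XML INDEX SXI_ProductSpecs
  ON dbo.ProductSpecs (Specs)
  FOR
  (
    pathWeight = '/Specs/Weight' AS SQL NVARCHAR(50),   -- promoted for relational-style filters
    pathColor  = '/Specs/Color'  AS XQUERY 'xs:string'  -- promoted for XQuery predicates
  );

Because only the promoted paths are indexed, the index stays far smaller than a full primary XML index, but it helps only the queries that touch those paths.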

Plenty more where that came from

Given the number of moving parts in SQL Server, it’s no surprise that some of them run more smoothly than others, yet even the more controversial of those parts will likely benefit someone. For example, if you can work within the limitations of stretch databases, you might be able to take advantage of the extended Azure storage, or if you need to query XML data and can predict exactly what those queries will look like, a little trial-and-error might yield a selective XML index that can actually improve performance.

SQL Server is an evolving product, and now that SQL Server 2017 RTM is available, we can expect more changes and more features. With any luck, we’ll also see improvements in the components we’ve covered in this series. I’ve already come across discussions about enhancements to in-memory OLTP, the Database Tuning Advisor, and Master Data Services (MDS). No doubt there have been other improvements as well, any of which are bound to make somebody out there happy.