SQLServerCentral Editorial

Actively Choosing Compatibility


SQL Server databases have had a compatibility level for a long time. This setting tells the engine to process code against that database as if it were a particular version. The levels run from 80 (SQL Server 2000) to 160 (SQL Server 2022), and almost every new release of SQL Server has introduced a new level.
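
If you're curious where your own databases sit, the level is exposed in sys.databases, so a quick query like this will list them:

-- List each database and its current compatibility level
-- (160 = SQL Server 2022, 150 = 2019, 140 = 2017, 130 = 2016, and so on)
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;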

However, not all versions support all compatibility levels. For example, my SQL Server 2022 instance supports levels back to 100 (SQL Server 2008). If I wanted a database to act like a version older than 2008, say 2005, I would need to install a SQL Server 2012 or older instance. There is a table of engine versions and supported compatibility levels on the ALTER DATABASE Docs page.
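
If you do decide to move a database to a different level, it's a single statement. Here, AdventureWorks is just a stand-in for your database name:

-- Drop an existing database back to the SQL Server 2008 level
ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 100;

-- Or bring it up to the SQL Server 2022 level
ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 160;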

When you create a new database, by default it is at the current compatibility level. However, if you upgrade a database, the level might not change. There are limits to which levels each version supports, though, so an upgrade might still change your database.

An administrator might choose to keep an older level for compatibility purposes. Perhaps your code has an identifier that is now a keyword, or perhaps you expect some code to behave a certain way. However, not all changes are protected by the compatibility level. Most of the time an administrator must change the setting manually, which is something that can slip through the cracks. If you don't change it right away, it likely isn't going to change.
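
Keep in mind that the level also gates some newer language features. As one small illustration, STRING_SPLIT only works when the database is at level 130 (SQL Server 2016) or higher, so a forgotten old level can quietly block code like this:

-- Works at compatibility level 130 or higher;
-- fails with "Invalid object name 'STRING_SPLIT'" at lower levels
SELECT value
FROM STRING_SPLIT('red,green,blue', ',');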

There was an interesting post from Brent Ozar recently that explained a bit about compatibility levels in a SQL Server database. I wonder how many people actively choose a level rather than just accepting the default for that instance. Brent gives some advice in the post, and his recommendations vary a bit depending on whether you are happy with the system or not. He also recommends measuring your system and then evaluating a change, especially if this is a database for vendor software.

I don't think changing or updating this setting is a priority, but I do think it is important to know when your level doesn't match the instance and to document that fact. At some point, after many upgrades, you might find your level isn't supported any longer, and then your database might have immediate issues. A good monitoring system can alert you to mismatches so they can be evaluated when there is time to clean up or test changes to the system.
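
As a hypothetical starting point for that kind of check, you could compare each user database against the model database, a rough proxy for what new databases on the instance get, in plain T-SQL:

-- Flag user databases whose compatibility level differs from model's,
-- a rough stand-in for the instance default for new databases
DECLARE @default_level tinyint =
    (SELECT compatibility_level FROM sys.databases WHERE name = 'model');

SELECT name, compatibility_level, @default_level AS instance_default_level
FROM sys.databases
WHERE database_id > 4            -- skip master, tempdb, model, msdb
  AND compatibility_level <> @default_level;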

Each version of SQL Server adds new features, like the changes for Intelligent Query Processing. In general, we want to take advantage of these if we can. However, not all workloads respond positively, so as Brent mentions, you need to test and evaluate your workload. Hopefully, you get a clear "things are better" or "things are worse" answer when changing levels. When some queries perform better and some worse, you have choices to make. Often the default is "do nothing," which may or may not be the best decision, but the devil you know is sometimes easier to deal with than the one you don't.
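
One lower-risk way to evaluate a change is to test individual queries before touching the database setting. The optimizer exposes a documented hint, QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_n, for exactly this; the table and columns below are just placeholders from the AdventureWorks sample:

-- Compile this one query as if the database were at level 150 (SQL Server 2019),
-- without changing the database's actual compatibility level
SELECT SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader
WHERE OrderDate >= '2014-01-01'
OPTION (USE HINT ('QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_150'));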

That's fine, but consciously make that choice. Keep an eye on your system and don't just accept defaults, whether those come from Microsoft or are the ones you've left in place after an upgrade. Actively manage your systems to get the best performance you can for your clients.
