SQLServerCentral is supported by Redgate

I may need to re-think the use of table variables

By Ben Kubicek

Today we have a guest editorial as Steve is out of the office.

I am not sure how many of you use table variables. I would presume many of you have used them and gotten burned by them in the end. At first look, a table variable seems like a great way to work with a subset of data that you may need to join to or update in a SQL process. The initial testing goes well and everything seems to perform great. Then the code goes to production, the table variable ends up with way more data than you expected, and the performance issues begin.

If you aren’t familiar with table variables, they are similar to temp tables, except you don’t need to create and drop them like temp tables. You use a declare statement to define the table variable. There are some downsides to table variables. You cannot create non-clustered indexes on them (at least not before SQL Server 2014 added inline index syntax), and any constraints or default column values must be declared inline, because a table variable cannot be altered after it is declared. SQL Server also does not maintain statistics on them. These detractions aside, they can still be useful in the right situation.
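As a minimal sketch of what that declare statement looks like (the table and column names here are my own invention, not from any particular system):

```sql
-- Declare a table variable; constraints and defaults go inline,
-- since the table cannot be altered after declaration.
DECLARE @RecentOrders TABLE
(
    OrderID   int           NOT NULL PRIMARY KEY,           -- inline constraint is allowed
    OrderDate datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    Amount    decimal(10, 2) NOT NULL
);

INSERT INTO @RecentOrders (OrderID, Amount)
VALUES (1, 19.99), (2, 45.00);

-- No DROP needed; the variable goes out of scope at the end of the batch.
SELECT OrderID, OrderDate, Amount
FROM @RecentOrders;
```

Note that unlike a temp table, there is no cleanup step; scoping is what makes table variables convenient in the first place.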

I don’t know about you, but once I have had a bad experience with something, I tend to shy away from it in the future. It has come to the point for me that I won’t use a table variable unless I am extremely certain that the total number of rows in the table variable will only be a few hundred at most. Of course, this self-imposed limitation also limits their usefulness.

I recently came across an article by Brent Ozar discussing how table variables have changed in SQL Server 2019. If you didn’t get a chance to read it, I would recommend it; you can see it here.

He discusses how the current cardinality estimation for table variables is way off. In his example he has over one million rows in the table variable, but the query plan thinks there is just one row. You can see how that might cause some performance issues. He then shows the same example in SQL Server 2019 and the row estimation still isn’t perfect, but it is much better and so is the performance.
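A hedged sketch of the kind of demo he describes (the names and row counts here are my own, not taken from his article):

```sql
-- Load a table variable with a large number of rows.
DECLARE @Big TABLE (Id int NOT NULL);

INSERT INTO @Big (Id)
SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

-- Under compatibility level 140 and below, the estimated number of rows
-- for the scan of @Big in the actual plan is 1, no matter how many rows
-- it really holds. Under level 150 (SQL Server 2019), table variable
-- deferred compilation uses the real row count at first compilation,
-- so the estimate, and often the plan, is far better.
SELECT COUNT(*) FROM @Big;
```

The SQL Server 2019 feature behind this improvement is called table variable deferred compilation, and it kicks in under database compatibility level 150.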

So now I have a bit of a dilemma. After all of these years of bad experiences with table variables, I may need to re-think using them in appropriate situations. Of course, there is still the issue of when my company will actually be up and running on SQL Server 2019, but that is a different discussion altogether. At some point, I will be using SQL Server 2019, and table variables will be more or less fixed. I guess I will consider using them; it will just feel a bit strange putting this tool back in the tool belt.

How about you? Have you been burned by bad performance of table variables? When you are up on SQL Server 2019, will you consider using them again?
