SQLServerCentral Editorial

Solving Tempdb Issues


While reading Remus Rusanu's piece on SQL Server 2016 development, one item struck me. He notes that an engineer at Microsoft realized there was a tempdb issue with Azure SQL Database. The basic problem, discovered only after "months of investigation," was that the tempdb files were sized unequally. Solving it required a failover, which could be problematic.

Now, on one hand this seems silly. After all, we've known since SQL Server 2005 that the guidance is to size all tempdb data files the same. Shouldn't engineers at Microsoft be following all the practices known for optimizing SQL Server performance? I think all Microsoft people should follow this, especially those working on other products (*cough* SharePoint *cough*), but at the very least SQL Server engineers should have a huge checklist, perhaps enforced with Policy-Based Management (PBM) or unit tests, that warns about non-compliance.

On the other hand, since we've known this is an issue since SQL Server 2005, why does SQL Server allow this? I'd think some simple tooling and a few checks in code could eliminate this as an issue in any new install. Catch the issue during upgrades and grow (or shrink) files to be the same size, or at least warn the administrator. I know there are situations where you need to add a new file in an emergency, but shouldn't this be an item we push administrators to correct quickly after the emergency is past? Or at least force them to consciously make an exception.
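As a rough illustration of what such a check might look like, here is a minimal T-SQL sketch that compares the sizes of the tempdb data files and raises an informational message if they differ. It assumes only the standard tempdb.sys.database_files catalog view, which reports size in 8 KB pages; whether the response is a warning or an automatic resize is a choice I'd leave to the administrator.

    -- List tempdb data files and their current sizes.
    -- sys.database_files reports size in 8 KB pages.
    SELECT  name,
            size * 8 / 1024 AS size_mb
    FROM    tempdb.sys.database_files
    WHERE   type_desc = 'ROWS';        -- data files only; ignore the log

    -- Warn if the data files are not all the same size.
    IF (SELECT COUNT(DISTINCT size)
        FROM   tempdb.sys.database_files
        WHERE  type_desc = 'ROWS') > 1
        RAISERROR ('tempdb data files are not equally sized.', 10, 1);

The same query could just as easily feed a PBM condition or a scheduled Agent job that alerts the DBA rather than printing a message.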

There are plenty of limitations and restrictions in SQL Server systems that Microsoft forces on us. We have limits on where new features can be used, various historical limits on function parameters, and more. Why not also enforce limits that prevent issues like this? I'm sure people will complain, but wouldn't those complaints be outweighed by fewer issues overall across all customers?
