How SQL Server Can Just Go Faster

  • Kathi Kellenberger

    Comments posted to this topic are about the item How SQL Server Can Just Go Faster

  • Jeff Moden

    While I agree that there have been some pretty cool, successful attempts at making things "better" which, as you say, means "go faster", in this case all of it has come with a good bit of cost that a lot of people may not be aware of.  We've not yet made the jump past 2016, but the changes there (during the jump from 2012 to 2016) have cost me quite a bit.  Every time we do an update, I cringe as if I were testing a bomb that goes off on impact with a sledgehammer.

    For example,

    Automatic Forced Inclusion of Trace Flag 1117 Functionality on TempDB

    MS still screwed many of us by implementing the equivalent of TF 1117 on TempDB without any way to temporarily disable it during operations that may require massive growth of just one file on a limited-size drive.  I found this out the hard way when trying to do a minimally logged INSERT/SELECT of a large table that caused such a thing to happen in TempDB.  The task would have succeeded if only one of the TempDB files had grown to the required size but, instead, they all did and it ran TempDB out of disk space.
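    As a quick way to see the behavior Jeff describes, the following sketch (file names and sizes are whatever your instance has; nothing here is from the original post) inspects the tempdb file layout and the forced grow-all-files setting:

    ```sql
    -- Check current tempdb file sizes and growth settings
    -- (size and growth are stored in 8 KB pages).
    SELECT name,
           size   * 8 / 1024 AS size_mb,
           growth * 8 / 1024 AS growth_mb,
           max_size
    FROM tempdb.sys.database_files;

    -- On SQL Server 2016+, the old TF 1117 behavior is baked in as the
    -- AUTOGROW_ALL_FILES filegroup property.  For tempdb it is on by
    -- default and, to my understanding, cannot be switched off, so the
    -- following fails for tempdb (it works for user databases):
    -- ALTER DATABASE tempdb MODIFY FILEGROUP [PRIMARY] AUTOGROW_SINGLE_FILE;
    ```

    The practical consequence is the one in the post: any operation that would have needed one file to grow makes every tempdb file grow by the same amount.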

    Forced Sorting in TempDB During What Should Be "Minimal Logging"

    What was the task where it would have been beneficial to have only one TempDB file grow?  That traces back to another ill-begotten change made somewhere previously: if you SET IDENTITY_INSERT ON, things like massive INSERT/SELECTs, whether correctly set up for minimal logging or not, produce a sort of ALL rows of the Clustered Index (or any index keyed on the IDENTITY column) to support the correct ordering for the minimally logged insert, even though the source data is already in the correct order and does not need to be sorted, which is advertised as a feature of doing minimal logging correctly.

    MS still hasn't solved the problem of the massive sorts required when using SET IDENTITY_INSERT ON, which continues to screw up a lot of what I need to do on a regular basis.
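    For readers who haven't hit this, a minimal sketch of the pattern in question (the table and column names are hypothetical, not from the post):

    ```sql
    -- Large, hopefully minimally logged INSERT...SELECT into a table
    -- whose clustered key is an IDENTITY column.
    SET IDENTITY_INSERT dbo.TargetTable ON;

    INSERT INTO dbo.TargetTable WITH (TABLOCK) (Id, SomeCol)
    SELECT Id, SomeCol
    FROM dbo.SourceTable
    ORDER BY Id;   -- source is already in clustered-key order, yet the
                   -- plan can still contain a full Sort operator, and
                   -- that sort spills into tempdb for large row counts

    SET IDENTITY_INSERT dbo.TargetTable OFF;
    ```

    It's that Sort operator, on ALL of the rows being inserted, that drove the tempdb growth described above.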

    Automatic Forced Fast Inserts During Bulk Loading

    MS screwed everyone (and most don't know it yet) when, as part of the "Go Faster" program, they made it so that things like "INSERT BULK" (not to be confused with BULK INSERT, which is also affected but produces less of an effect due to the number of rows usually involved) automatically reserve new extents without checking whether partially full extents with available space exist, even if just one row is inserted.  It turns out that you can overcome that silly mistake by enabling or disabling Trace Flag 692 at the session level, but that means you either have to modify all singleton-insert code (specifically, code that uses "INSERT BULK", which is present throughout a lot of software, e.g. WhatsUp Gold) or modify all large-batch code that does bulk inserts (e.g. BULK INSERT, BCP, etc.) because it is NOT a database-level setting.

    For me, the end result was an 8GB database that should have required no maintenance exploding to 55GB, with 47GB of totally unused disk and memory space in the form of extents that contained just one row or a low number of rows that would otherwise fit on part of a single page.  I had to enable TF 692 at the global level to prevent this because we can't actually change the managed code of 3rd-party software.
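    Enabling the flag globally, as described above, is a one-liner (the -1 argument is what makes it instance-wide rather than session-scoped):

    ```sql
    -- Disable "fast inserts" for the whole instance, so bulk-load code
    -- paths go back to filling partially used extents.
    DBCC TRACEON (692, -1);

    -- Verify it took effect.
    DBCC TRACESTATUS (692);

    -- Session-only alternative:  DBCC TRACEON (692);
    -- Or add -T692 as a startup parameter to survive restarts.
    ```

    Note that DBCC TRACEON at the global level does not persist across a service restart, which is why the startup-parameter form matters for a permanent fix.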

    Here's the listing of what Trace Flag 692 does.  Notice the part about unused space.

    Disables fast inserts while bulk loading data into heap or clustered index. Starting SQL Server 2016 (13.x), fast inserts is enabled by default leveraging minimal logging when database is in simple or bulk logged recovery model to optimize insert performance for records inserted into new pages. With fast inserts, each bulk load batch acquires new extent(s) bypassing the allocation lookup for existing extent with available free space to optimize insert performance.

    With fast inserts, bulk loads with small batch sizes can lead to increased unused space consumed by objects hence it is recommended to use large batchsize for each batch to fill the extent completely. If increasing batchsize is not feasible, this trace flag can help reduce unused space reserved at the expense of performance.

    Note: This trace flag applies to SQL Server 2016 (13.x) RTM and higher builds.

    Scope: global or session

    https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15

    The New "Improved" Cardinality Estimator

    And then there are the changes that MS made to the cardinality estimator, which crushed performance on a lot of seriously significant code we had and which we (thankfully) could override to use the original.
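    The post doesn't say which override was used; on SQL Server 2016+ there are two documented ways to fall back to the legacy estimator (the query below is a made-up example, not from the post):

    ```sql
    -- Database-wide: revert every query in this database to the
    -- pre-2014 cardinality estimator.
    ALTER DATABASE SCOPED CONFIGURATION
    SET LEGACY_CARDINALITY_ESTIMATION = ON;

    -- Per query: leave the database setting alone and override only
    -- the statements that regressed.
    SELECT o.OrderID
    FROM dbo.Orders AS o
    WHERE o.CustomerID = 42
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
    ```

    The per-query hint is the safer surgical fix when only a handful of plans regressed; the database-scoped setting is the blunt instrument for when most of a workload suffers.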

    And yet, even with all the so-called "improvements", they won't resolve a simple 11-year-old request for a Tally Function that operates at machine-language speeds or a PIVOT function that works at least as well as it does in Access.
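    For context, the Tally Function being asked for has to be built by hand in T-SQL today.  A common sketch in the style of the fnTally linked in the signature below (this is an illustration, not Jeff's exact implementation) uses cascading cross joins so that no table is ever read:

    ```sql
    -- Inline table-valued function returning the numbers 1..@MaxN.
    CREATE FUNCTION dbo.fnTally (@MaxN BIGINT)
    RETURNS TABLE WITH SCHEMABINDING AS
    RETURN
    WITH E1(n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),
                                         (1),(1),(1),(1),(1)) v(n)), -- 10 rows
         E3(n) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b CROSS JOIN E1 c), -- 1,000
         E9(n) AS (SELECT 1 FROM E3 a CROSS JOIN E3 b CROSS JOIN E3 c)  -- 1,000,000,000
    SELECT TOP (@MaxN)
           N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM E9;
    ```

    The 11-year-old request is for something like this as a built-in that runs at machine-language speed, instead of every shop rolling its own.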

    It's Not Just Paranoia...

    I could wax on for much longer because the list of such "improvements" that have caused such pains is somewhat large and the pains are sometimes hidden or deep ("It just runs faster", my aching arse), but I won't.  The supposed "performance improvements" that MS has made have cost me a shedload of problems, have wasted a huge amount of time spent discovering what the problems are ("OK... what the hell is going on here?"), and have made some functionality of SQL Server simply go away, which is a killer with the size of data I have to work with (and at only 2TB for one of my tables, I consider mine to be quite small nowadays).

    Hopefully, the promises of the new functionality that you've listed in the article, and a lot of the stuff that you haven't, don't come at the same dear, significant, and sometimes non-overridable costs that previous "improvements" have.

    Ah... but there's more, isn't there?  Remember the regressions and problems built into 2012 and 2014?  Remember things like an online rebuild of a Clustered Index corrupting said index and the raft of other problems they built in as supposed "improvements"?

    We're upgrading from 2016 to 2019 sometime in the first half of next year.  The good part is, it will also include a migration to new hardware and so I can have 2016 servers waiting in the wings if all hell breaks loose and the "improvements" come at too high a cost.

    All that being said, I'm scared to death of moving from 2016 to 2019 next year and MS has done nothing to quell the ongoing fears associated with their supposed "improvements".  In fact, they continue to expand them.


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
    "If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
    "Change is inevitable... change for the better is not."
    When you put the right degree of spin on it, the number 3|8 is also a glyph that describes the nature of a DBAs job. 😉

    Helpful Links:
    How to post code problems
    Create a Tally Function (fnTally)

  • David Burrows

    Jeff Moden wrote:

    All that being said, I'm scared to death of moving from 2016 to 2019 next year and MS has done nothing to quell the ongoing fears associated with their supposed "improvements".  In fact, they continue to expand them. 

    And with 2017 and then 2019, will there be a 2020?  What about the lifecycle for 2016?  And what about the move away from Service Packs (traditionally used for lifecycle)?

    Far away is close at hand in the images of elsewhere.
    Anon.
