trace flag 610

  • Has anyone tried turning ON trace flag 610 while an AG (Availability Group) is enabled? Does it work?

  • Adding some background, because I didn't know what 610 did, let alone how it pertains to availability groups:

    http://sqlblogcasts.com/blogs/simons/archive/2010/11/17/trace-flag-610-when-should-you-use-it.aspx


    Trace Flag 610 – When should you use it?

    Thanks to Marcel van der Holst for providing this great information on the use of Trace Flag 610. This trace flag can be used to get minimal logging into a B-tree (i.e. a clustered table, or an index on a heap) that already has data. It is gated behind a trace flag because in testing they found some scenarios where it didn't perform as well. Marcel explains why below.

    TF610 can be used to get minimal logging in a non-empty B-tree. The idea is that when you insert a large amount of data, you don't want to create a lot of transaction log. Initially the idea was to do this automatically in the engine, but we ran into a bunch of issues, and thus we put it under a trace flag.
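
    For illustration only, here is a minimal sketch of how the flag is typically switched on and exercised. The database, table, and column names are placeholders (not from the thread), and minimal logging additionally assumes the database is in the SIMPLE or BULK_LOGGED recovery model:

    -- Placeholder objects: StageDB, dbo.FactSales, dbo.StageSales are illustrative names.
    -- Minimal logging also requires SIMPLE or BULK_LOGGED recovery, set here as an assumption.
    ALTER DATABASE StageDB SET RECOVERY BULK_LOGGED;
    GO

    DBCC TRACEON (610, -1);    -- enable TF 610 globally; omit -1 to enable it for this session only
    DBCC TRACESTATUS (610);    -- confirm the flag is active
    GO

    USE StageDB;
    GO
    BEGIN TRAN;
        -- bulk insert into a non-empty table with a clustered index
        INSERT INTO dbo.FactSales (SaleID, SaleDate, Amount)
        SELECT SaleID, SaleDate, Amount
        FROM dbo.StageSales;
    COMMIT;    -- the commit has to flush the minimally logged pages to disk (see point 1 below)
    GO

    DBCC TRACEOFF (610, -1);   -- switch the flag back off after the load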

    Here are some of the things to be aware of:

    1) When the transaction commits, it needs to write all pages that were minimally logged to disk. Even though pages will get written asynchronously when they are full, it is possible that not all pages can be written in time if the I/O subsystem is slow. This can make the operation slower than full logging, since with full logging SQL only needs to write the commit log record and then it is done.

    2) There is a bug in SQL 2008 where the transaction log grows very big (due to log reservation), even though the log is never used. This is due to how rollback works for minimally logged pages. The bug was fixed in SQL 2008 R2 and in one of the CUs for SQL 2008 (I forgot which one).

    3) If you have a large bufferpool (hundreds of GBs or more) and the I/O subsystem cannot keep up, the commit could take very long as well.

    4) Don't make transactions too big. Even though the feature is built to be used when inserting a large number of rows, it does not work very well if you make the transactions too big. For instance, insert about 10,000 rows at a time (about 3-5 GB max); see the batching sketch after this list. We had one customer inserting 1 TB in a single transaction, which caused some issues.

    5) The feature is NOT designed to make inserts go faster. It is mainly written to reduce the size of the transaction log. There are cases where performance might be better, but there are also cases where it is slower.
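
    As a sketch of point 4, here is one common way to break a large load into moderate batches, each in its own transaction. The table names and the 10,000-row batch size are illustrative placeholders, and the OUTPUT ... INTO form assumes the target table has no triggers or foreign keys:

    DECLARE @rows int = 1;

    WHILE @rows > 0
    BEGIN
        BEGIN TRAN;

        -- move the next batch out of staging and into the target in one statement
        DELETE TOP (10000)
        FROM dbo.StageSales
        OUTPUT deleted.SaleID, deleted.SaleDate, deleted.Amount
            INTO dbo.FactSales (SaleID, SaleDate, Amount);

        SET @rows = @@ROWCOUNT;

        COMMIT;   -- each batch commits (and flushes its minimally logged pages) on its own
    END;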

    My recommendation is to test this feature in a test environment that has similar I/O characteristics to the production system and see if it helps.
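
    One way to run that comparison is to check how much log a load transaction generates before it commits, once with TF 610 off and once with it on. This is a sketch using the same placeholder tables as above and the sys.dm_tran_* DMVs:

    BEGIN TRAN;

    INSERT INTO dbo.FactSales (SaleID, SaleDate, Amount)   -- placeholder load
    SELECT SaleID, SaleDate, Amount
    FROM dbo.StageSales;

    -- log bytes used/reserved by the current session's transaction in this database
    SELECT t.database_transaction_log_bytes_used,
           t.database_transaction_log_bytes_reserved
    FROM sys.dm_tran_database_transactions AS t
    JOIN sys.dm_tran_session_transactions AS s
         ON s.transaction_id = t.transaction_id
    WHERE s.session_id = @@SPID
      AND t.database_id = DB_ID();

    COMMIT;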

    I would only use it if you have fast enough I/O to keep up with creation of the minimally logged pages.

    However, in the cases where you have a fast enough I/O subsystem, it is a great way to reduce the size of the transaction log.

    Published 17 November 2010 00:16 by simonsabin

    Lowell


