SQL Server 2014 Checklist for Performance

  • Comments posted to this topic are about the item SQL Server 2014 Checklist for Performance

    Rudy

  • "Use table variables instead of temp tables"

Why would this be a normal thing to do? Table variables only report 1 row for statistics; temp tables would be better for general coding, in my opinion. Thoughts, anybody?

  • Agreed, use temp tables and not table variables.

    Also, there are plenty of generalizations here, which may not apply to all cases.

  • Agreed, table variables can be terrible for performance. In fact, that's the main reason why multi-statement table-valued functions are often so bad (the last bullet point), because they use table variables to return the data.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
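To illustrate the point about multi-statement table-valued functions, here is a minimal sketch (table and column names are invented for illustration). A multi-statement function returns its results through a table variable, so the optimiser sees a fixed guess for the row count (1 row before SQL Server 2014, 100 rows under the 2014 cardinality estimator); an inline function is expanded into the calling query and uses the base table's real statistics:

```sql
-- Multi-statement TVF: results come back via the @Result table
-- variable, so the outer query gets a fixed low row estimate.
CREATE FUNCTION dbo.GetOrdersMulti (@CustomerID int)
RETURNS @Result TABLE (OrderID int, OrderDate date)
AS
BEGIN
    INSERT @Result (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END;
GO

-- Inline TVF: a single SELECT the optimiser expands into the
-- outer query, so real statistics on dbo.Orders are used.
CREATE FUNCTION dbo.GetOrdersInline (@CustomerID int)
RETURNS TABLE
AS
RETURN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO
```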
  • Often is a key word

Table variables can be much quicker, but if they get too big they can bring the server to a halt.

  • Why would they be 'much quicker' than temp tables? The reduced logging isn't going to come into play until there's a lot of data, same with the not needing to update statistics.

  • I have other issues with this article.

We have found that parameterised queries do not seem to work well. The optimiser creates an execution plan which is not tailored to the particular values that are passed.

    We have proved that dynamically generated SQL WITH THE VALUES will frequently outperform parameterised queries on very large tables where statistics are regularly updated.
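The behaviour described above is usually parameter sniffing: the cached plan is built for whichever parameter values were seen first. A hedged sketch (table and column names invented) of an alternative to embedding literal values in dynamic SQL is the `OPTION (RECOMPILE)` query hint, which gets similar plan quality at the cost of a compile per execution:

```sql
-- Default behaviour: the plan is compiled for the sniffed value of
-- @Status and then reused, so a value matching 90% of the table
-- gets the same plan as one matching 0.1%.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE Status = @Status;

-- OPTION (RECOMPILE) compiles a fresh plan for the actual value on
-- every execution - comparable to embedding the literal value,
-- but paying a compilation each time the statement runs.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE Status = @Status
OPTION (RECOMPILE);
```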

  • "Use table variables instead of temp tables"

What are "table variables"?

    Does it mean that "DECLARE @temp TABLE()" is preferred over "CREATE TABLE #temp ()" ?
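Yes, those are the two forms the discussion is contrasting. A minimal sketch (column names invented): the key difference is that the temp table gets column statistics and accurate row counts, while the table variable does not.

```sql
-- Table variable: no column statistics; the optimiser assumes a
-- fixed row count (1 row before SQL Server 2014, 100 rows under
-- the 2014 cardinality estimator), however many rows it holds.
DECLARE @temp TABLE (ID int PRIMARY KEY, Amount money);

-- Temp table: lives in tempdb, supports column statistics, and
-- triggers recompiles as it grows, so joins against it get
-- realistic row estimates.
CREATE TABLE #temp (ID int PRIMARY KEY, Amount money);
```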

  • "Normalize the data as it benefits performance" - should have "(depends)" tag.

The statement is not always true, since too many inner/left joins drive up CPU usage.

    "Don't use NOLOCK" - should have "(depends)" tag too.

    It depends on database design. Personally, I use it a lot to increase SELECT performance.

  • m.kancleris (12/29/2014)


    "Don't use NOLOCK" - should have "(depends)" tag too.

    It depends on database design. Personally, I use it a lot to increase SELECT performance.

    NOLOCK is not a performance tuning feature. It's a way of telling SQL to please trade off integrity for concurrency. So more concurrency, less data accuracy. It can (and does) allow SQL to potentially return rows multiple times or to miss rows as it reads data.

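For readers following along, the hint under discussion looks like this (a sketch with an invented table name). A `NOLOCK` table hint on a SELECT is equivalent to reading at the READ UNCOMMITTED isolation level:

```sql
-- Table hint form: applies to this table reference only.
SELECT OrderID, Total
FROM dbo.Orders WITH (NOLOCK);

-- Equivalent session-level form: every read in the session sees
-- uncommitted (dirty) data until the isolation level is changed.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Total
FROM dbo.Orders;
```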
  • GilaMonster (12/29/2014)



    NOLOCK is not a performance tuning feature. It's a way of telling SQL to please trade off integrity for concurrency. So more concurrency, less data accuracy. It can (and does) allow SQL to potentially return rows multiple times or to miss rows as it reads data.

It does not return rows multiple times and does not miss any row (unless it has been deleted and then rolled back). It reads the data, including uncommitted changes.

With a good database design it is a very useful hint that minimizes locking.

  • m.kancleris (12/29/2014)


    It does not return rows multiple times and does not miss any row

    Yes, it can.

    http://technet.microsoft.com/en-us/library/ms190805%28v=sql.105%29.aspx

    Missing one or more rows that were not the target of update

    When you are using READ UNCOMMITTED, if your query reads rows using an allocation order scan (using IAM pages), you might miss rows if another transaction is causing a page split. This cannot occur when you are using read committed because a table lock is held during a page split and does not happen if the table does not have a clustered index, because updates do not cause page splits.

    http://blogs.msdn.com/b/sqlcat/archive/2007/02/01/previously-committed-rows-might-be-missed-if-nolock-hint-is-used.aspx

    Here is a simple scenario that shows that the NOLOCK hint might miss some rows when concurrently executed updates (INSERTs in our case) are causing page splits. The INSERTs in script #2 will cause page splits because the table has a clustered key. First, you insert two rows, one with key value 1, and another with key value 10000. Then, you insert rows with key values 2, 9999, 3, 9998, etc. When the first page is full, it is split as described above. However, if the concurrently performed SELECT is using the NOLOCK hint on the same table, and the optimizer had chosen the allocation scan, the scan is moving only forward. Therefore, it might miss the ½ page of rows if the newly allocated page falls behind the scan.

    Similarly a page split can cause a scan to read rows twice, if the page is split 'behind' the scan point and half of the rows moved 'ahead' of the scan.

    Another example

    http://www.jasonstrate.com/2012/06/the-side-effect-of-nolock/

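The scenario in the linked SQLCAT post can be sketched as follows. This is a hedged reconstruction of the setup it describes, not the original script; the table name is invented, and the two batches are meant to run in separate sessions:

```sql
-- Session 1: inserts converge from both ends of the key range
-- (1 and 10000, then 2 and 9999, ...), forcing page splits in the
-- clustered index as each page fills.
CREATE TABLE dbo.SplitDemo (ID int PRIMARY KEY CLUSTERED, Filler char(200) NOT NULL);
DECLARE @lo int = 1, @hi int = 10000;
WHILE @lo < @hi
BEGIN
    INSERT dbo.SplitDemo (ID, Filler) VALUES (@lo, 'x'), (@hi, 'x');
    SET @lo += 1;
    SET @hi -= 1;
END;

-- Session 2, run concurrently: if the optimiser chooses an
-- allocation-order scan under NOLOCK, rows moved to a newly
-- allocated page behind the scan point can be missed.
SELECT COUNT(*) FROM dbo.SplitDemo WITH (NOLOCK);
```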
  • GilaMonster (12/29/2014)


    m.kancleris (12/29/2014)


    It does not return rows multiple times and does not miss any row

    Yes, it can.


    ok. 🙂

    0:1 you win 🙂

Anyway, an "index scan" is a rare thing 🙂 on a properly designed database (primary key = identity()). I would not recommend NOLOCK for juniors either 🙂

  • m.kancleris (12/29/2014)


    Anyway, "index scan" is rare thing 🙂 on proper databases

    Is it? 😉

It's surprisingly easy for SQL to tip over to a table/index scan when it's using a non-covering index, and it can happen as the data grows without the developer realising. An identity PK doesn't prevent this unless every single access to a table is a singleton seek via the PK, which is rather rare.

Much as I dislike generalised articles about setting up SQL Server, and I disagree with much of this article, it does provoke discussion, and my fellow DBA and I had a few comments to share. Generalising is risky, and we all do it, but it's good if it makes you think about and review what you do.

    I might observe some of the previous discussion is a bit scary < grin >

The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
