SQL Server Best Practices

  • Recurs1on (9/22/2012)


    The original intent was to avoid unnecessary table scans because of poorly written queries that use NOT EXISTS, NOT IN or != in cases where it is possible to re-write them using EXISTS, IN, or =.

    Why? So you can have a more complex query with the same or worse performance?

How would you convert WHERE SomeValue != 0 into an equality? An IN with every single other value in the table? I assure you, the IN won't perform better.

    The '!=, NOT IN/EXISTS causes table scans' is a myth. It's not the NOT that 'causes' table scans, it's selecting a huge portion of the table. Don't hobble your developers into writing extremely complex code to avoid a NOT, the extremely complex code likely won't perform any better.

If your intent is to avoid unnecessary table scans because of poorly written queries, then someone needs to review the query and execution plan to ensure that it is optimal and any scans are necessary.
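A minimal sketch of that point (the table, column, and index names here are hypothetical): whether `!=` produces a scan depends on how many rows qualify, not on the operator itself.

```sql
-- Hypothetical table where most rows have SomeValue <> 0.
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY PRIMARY KEY,
    SomeValue INT NOT NULL,
    Filler    CHAR(200) NOT NULL DEFAULT ''
);
CREATE INDEX IX_Orders_SomeValue ON dbo.Orders (SomeValue);

-- If 95% of the rows have SomeValue <> 0, a clustered index scan is the
-- cheapest plan for this query; no rewrite of the operator changes that.
SELECT OrderID, SomeValue
FROM dbo.Orders
WHERE SomeValue <> 0;

-- If only a handful of rows qualify, the optimizer can still seek:
-- <> 0 on an indexed column is evaluated as two ranges (< 0 and > 0).
```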

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I won't repeat what others have already said; I agree in particular with what Jeff wrote.

    Table Design

    3.Do not blindly normalize everything

    Risks being read as a license to be lazy about normalisation. I would suggest instead something like:

Everything should be normalised to EKNF, and anything that isn't then in BCNF/4NF/5NF should be further normalised to the greatest degree that would avoid destroying the representation of functional dependencies property of the schema. From there, denormalisation is permissible only with prior agreement of the DBA (or senior tech management, or however it works in your organisation).

    Indexes

    6.Make and use covering indexes where possible

    but only where they will not impair performance.

    3.Use table variable for small data sets and temp tables for larger ones

    3.Prefer temp tables over table variable unless there is a clear performance advantage

Still not right. Prefer table variables where they have a functional advantage as well - this is sometimes rather more important than the performance advantage.

The obvious functional advantage is that their state is not transactional. As a result, it is often a good idea to write any diagnostic trace info you want to record to a table variable instead of to a permanent or temporary table, and copy it to the permanent log in a new transaction after a rollback in a catch block, since anything written to a temp table (or an ordinary table) would be destroyed by the rollback. Obviously this is more important during development and testing than for a released system, because you hope the released system doesn't contain bugs - but it probably will, since to err is human, and it's extremely nice (and will impress your customers) if you can diagnose and fix them quickly.
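A sketch of that pattern (the object names, including dbo.DiagnosticLog, are hypothetical): the table variable keeps its rows through the rollback, so the trace can be persisted afterwards.

```sql
DECLARE @Trace TABLE
(
    LoggedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    Msg      NVARCHAR(400) NOT NULL
);

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO @Trace (Msg) VALUES (N'Starting update');
    -- ... real work that may fail ...
    RAISERROR (N'Something went wrong', 16, 1);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;  -- undoes the work, but NOT the table variable

    INSERT INTO @Trace (Msg) VALUES (ERROR_MESSAGE());

    -- Persist the surviving trace rows in a new transaction.
    INSERT INTO dbo.DiagnosticLog (LoggedAt, Msg)
    SELECT LoggedAt, Msg
    FROM @Trace;
END CATCH;
```

Had @Trace been a temp table, the INSERTs made before the failure would have been rolled back along with everything else.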

    And a missing point, which should be made to fit in somewhere: Pay close attention to error management, covering all three major facets of EM: reporting, containment and recovery; and make containment and recovery automatic wherever possible.

    Tom

  • GilaMonster (9/22/2012)


If your intent is to avoid unnecessary table scans because of poorly written queries, then someone needs to review the query and execution plan to ensure that it is optimal and any scans are necessary.

    OK. Thanks for the input. I'll get rid of it.

  • L' Eomot Inversé (9/22/2012)


    I won't repeat what others have already said; I agree in particular with what Jeff wrote.

    Table Design

    3.Do not blindly normalize everything

    Risks being read as a license to be lazy about normalisation. I would suggest instead something like:

Everything should be normalised to EKNF, and anything that isn't then in BCNF/4NF/5NF should be further normalised to the greatest degree that would avoid destroying the representation of functional dependencies property of the schema. From there, denormalisation is permissible only with prior agreement of the DBA (or senior tech management, or however it works in your organisation).

The comments about this being abused are well taken. The reason this was in here originally is that there is one area of our system in particular that was over-normalized and is now difficult to change. It makes everything in that area more complex and, in one case, has even destroyed a data relationship that must be enforced by our code. Do you think it's worth trying to re-write this, or just handle it by more vigilant reviews by senior staff?

    Indexes

    6.Make and use covering indexes where possible

    but only where they will not impair performance.

    Isn't this already covered by #8 under indexes?

    8.Do not make new or expand existing indexes to speed up a single query without considering the performance impact on the system as a whole
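For reference, a covering index in this sense carries all the columns a query needs, so the base-table lookup disappears (table and column names here are hypothetical):

```sql
-- Fully covers:
--   SELECT OrderDate, Total FROM dbo.Orders WHERE CustomerID = @CustomerID;
-- CustomerID is the seek key; OrderDate and Total ride along at the leaf
-- level via INCLUDE, so no key lookup back to the clustered index is needed.
CREATE INDEX IX_Orders_CustomerID_Covering
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, Total);
```

Point 8's caveat still applies: every included column widens the index and adds write cost, so cover queries deliberately, not reflexively.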

Still not right. Prefer table variables where they have a functional advantage as well - this is sometimes rather more important than the performance advantage.

Thanks. I added the functional advantage part.

The points above are pretty comprehensive, and from real-world experience I know that most developers will pick a few favorite points from here and implement them. One additional point I always include in my recommendations is to get the code reviewed.

A common problem I have noticed with most .NET developers writing SQL is that they ignore what SQL Server is now capable of and continue to code as if they were working on SQL 2000. E.g., TRY...CATCH blocks are not used, just like the table data type or MERGE statements; sometimes partitioning is not implemented just because they do not understand it.

Hopefully, with the code being reviewed, they will appreciate some of these finer points and learn something new as they go along.

Also, you might want to implement the BPA (Best Practices Analyzer) to make sure the DBAs are also following some best practices 😀

Jayanth Kurup

  • Jayanth_Kurup (9/22/2012)


The points above are pretty comprehensive, and from real-world experience I know that most developers will pick a few favorite points from here and implement them. One additional point I always include in my recommendations is to get the code reviewed.

I definitely agree, but reviews of all .NET and SQL code are already mandatory before things can be checked into source control.

Also, you might want to implement the BPA (Best Practices Analyzer) to make sure the DBAs are also following some best practices 😀

That would be very helpful, but I'm actually not the DBA and don't have the ability to enforce that. I'm just the .NET engineer on the team who knows the most about SQL, so this fell to me. Our system architect is working to create standards documents that can be used both for training new hires and for providing a standard we can use in our code reviews. Whoever knows the most about a particular area got assigned to be point man for that section of the document.

  • Recurs1on (9/22/2012)


Do you think it's worth trying to re-write this, or just handle it by more vigilant reviews by senior staff?

I'd say drop it and require that table designs be checked and approved. Someone who doesn't know DB design isn't going to understand what you mean by excessive normalisation; someone who does probably doesn't need to be told.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • 4.Avoid the use of dynamic SQL where possible

5.Use sp_executesql instead of EXEC('string') whenever writing dynamic SQL

    Color me paranoid: I would change these to "Data access will be performed exclusively using parameterized stored procedures unless specifically authorized by the DBA."
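When dynamic SQL is unavoidable, the parameterized form looks roughly like this (table, column, and parameter names are hypothetical). The value is never concatenated into the string, so it can't inject, and the plan is cached and reused:

```sql
DECLARE @City NVARCHAR(50) = N'Oslo';

-- Injectable, and a new plan for every distinct value:
-- EXEC (N'SELECT CustomerID, Name FROM dbo.Customers
--          WHERE City = ''' + @City + N'''');

-- Parameterized alternative: one cached plan, no injection surface.
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT CustomerID, Name FROM dbo.Customers WHERE City = @City;';

EXEC sys.sp_executesql
    @sql,
    N'@City NVARCHAR(50)',
    @City = @City;
```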

    Rich

That's not completely possible in our system right now. We have, however, created views for all select access, and we do require that all inserts, updates, or deletes occur through parameterized stored procs.

  • Nils Gustav Stråbø (9/22/2012)


Recurs1on (9/22/2012)

We've had a lot of code written that just throws them in there without considering the huge performance hit you can take with the RBAR that is often associated with them.

Instead of just saying that scalar functions usually should be avoided, show them why. A screenshot from Profiler with thousands of calls to the scalar function for the query it is used in is worth a thousand words.

    Instead of redeveloping the wheel in this area, show them an article. The following article should do the job even if I do say so myself.

    http://www.sqlservercentral.com/articles/T-SQL/91724/
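As a sketch of the kind of rewrite involved (function, table, and column names here are hypothetical): the scalar UDF is invoked once per row, while the inline TVF is expanded into the calling query's plan.

```sql
-- Scalar UDF: executed row by row for every row the query touches.
CREATE FUNCTION dbo.fn_NetPrice (@Price MONEY, @Discount DECIMAL(5,2))
RETURNS MONEY
AS
BEGIN
    RETURN @Price * (1 - @Discount);
END;
GO
-- Inline TVF equivalent: the optimizer inlines it into the main plan.
CREATE FUNCTION dbo.itvf_NetPrice (@Price MONEY, @Discount DECIMAL(5,2))
RETURNS TABLE
AS
RETURN SELECT @Price * (1 - @Discount) AS NetPrice;
GO
-- Usage: CROSS APPLY instead of a per-row scalar call.
SELECT o.OrderID, np.NetPrice
FROM dbo.Orders AS o
CROSS APPLY dbo.itvf_NetPrice(o.Price, o.Discount) AS np;
```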

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

Just thought I'd pimp my PowerShell script that can auto-detect quite a few of the points that have been discussed here.

http://tsqlsmells.codeplex.com/

Not saying that the 'code review' process can ever be fully automated, but I think this has a place in there.



    Clear Sky SQL
My Blog

  • Hi All,

    Just a quick comment.

Choose the correct tool for the job. A truck will crush nuts, but if the end result is to eat the kernel, use a nutcracker instead.

Our job, mine anyway, is to teach which tool works best with which nut. These discussions are always a great learning opportunity, but sometimes it just depends...

    CodeOn

    😛
