• Jeff Moden (9/21/2012)


    The following isn't a bad suggestion but, unless you come up with a reasonable standard snippet of code for use in the CATCH block, you're going to have an awful lot of errors where no one can determine the origin, or even what the error actually is.

    8. Use TRY...CATCH instead of @@ERROR for error handling

    We do have one, and we're pretty good about using it. The other thing I forgot to mention is that this document will also be required reading for all new hires, so it's mostly in there for that purpose.
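
    For what it's worth, here's a minimal sketch of the kind of standard CATCH snippet being described; the divide-by-zero is just a stand-in to force an error, and the exact logging or re-throw details would be up to the team:

        BEGIN TRY
            SELECT 1/0 AS ForcedError;  -- stand-in for the real procedure body
        END TRY
        BEGIN CATCH
            DECLARE @ErrorNumber    INT,
                    @ErrorLine      INT,
                    @ErrorProcedure NVARCHAR(128),
                    @ErrorMessage   NVARCHAR(4000);

            SELECT  @ErrorNumber    = ERROR_NUMBER(),
                    @ErrorLine      = ERROR_LINE(),
                    @ErrorProcedure = ISNULL(ERROR_PROCEDURE(), N'ad hoc batch'),
                    @ErrorMessage   = ERROR_MESSAGE();

            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

            -- Re-throw with enough context to identify the origin of the error
            RAISERROR('Error %d in %s at line %d: %s', 16, 1,
                      @ErrorNumber, @ErrorProcedure, @ErrorLine, @ErrorMessage);
        END CATCH;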

    This is a great idea but has some nasty caveats depending on which ORM you may be using. For example, if you create a VARCHAR(10) column and pass filters to queries through Linq2Sql, Linq2Sql will convert the filter to NVARCHAR(10) in the code it executes. Since NVARCHAR has a higher datatype precedence than VARCHAR, the column being filtered will first be converted to NVARCHAR, which destroys any hope of getting an INDEX SEEK out of the code.

    Depending on how they might be used, I also have a personal dislike for BIT columns. For example, it's sometimes much more expedient to SUM a flag column to get a count of rows where the flag is set, and that's just not possible with the BIT datatype. I've also seen certain types of columns that really should be TINYINT for future expansion (think "bullet proofing" code) that some might not consider at design time.

    1. Use the smallest data type appropriate to the data you're storing

    I didn't know that. We don't use an ORM, but that's good advice to keep in mind if we ever decide to start. One of our biggest problems right now is that the business has mandated we keep data "forever". We all know that eventually that's not going to work, but right now, the difference between a TINYINT and a SMALLINT on some tables is hundreds of MB.
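
    To put the precedence point into code, a minimal sketch (dbo.Demo and its columns are hypothetical):

        CREATE TABLE dbo.Demo
        (
            DemoID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
            AcctCode VARCHAR(10) NOT NULL,
            IsActive TINYINT NOT NULL DEFAULT 0
        );
        CREATE INDEX IX_Demo_AcctCode ON dbo.Demo (AcctCode);

        -- VARCHAR literal: the index on AcctCode can be used for a SEEK
        SELECT DemoID FROM dbo.Demo WHERE AcctCode = 'ABC123';

        -- NVARCHAR filter (what Linq2Sql generates): NVARCHAR outranks VARCHAR
        -- in datatype precedence, so the column gets converted, which (on SQL
        -- collations in particular) turns the seek into a scan
        SELECT DemoID FROM dbo.Demo WHERE AcctCode = N'ABC123';

        -- And the flag-column point: a count of set rows is a plain SUM on a
        -- TINYINT, which a BIT column wouldn't allow without a CAST or CASE
        SELECT SUM(IsActive) FROM dbo.Demo;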

    The following two suggestions seem to form a paradox. I'll also just about guarantee that Item 3 will be severely abused.

    2. Keep tables narrow

    3. Do not blindly normalize everything

    They are kind of a paradox. There's often tension between normalizing data and performance. What I'm trying to get at here is that both need to be considered. Can you suggest a better way to express these competing goals?

    They fixed it in 2012, but I recommend that you recommend that MAX datatypes be avoided unless they are absolutely necessary. I know that goes along with using the smallest datatype possible, but prior to 2012, MAX datatypes cause some real index maintenance issues: indexes that contain them can't be rebuilt in an ONLINE fashion.

    We can't rebuild indexes online because we don't have enterprise SQL. Do you think this is still a problem?
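
    For reference, a minimal sketch of the pre-2012 behavior being described (dbo.Notes is hypothetical):

        -- VARCHAR(MAX) is stored as a LOB, and prior to SQL Server 2012 an
        -- index that includes a LOB column can't be rebuilt ONLINE at all
        CREATE TABLE dbo.Notes
        (
            NoteID   INT IDENTITY(1,1) NOT NULL,
            NoteText VARCHAR(MAX) NULL,
            CONSTRAINT PK_Notes PRIMARY KEY CLUSTERED (NoteID)
        );

        -- Fails on pre-2012 (and ONLINE = ON requires Enterprise Edition anyway):
        ALTER INDEX PK_Notes ON dbo.Notes REBUILD WITH (ONLINE = ON);

        -- The offline rebuild works on any edition and version:
        ALTER INDEX PK_Notes ON dbo.Notes REBUILD;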

    I'd specifically state that you should make the clustered index UNIQUE along with the following, because it has some very real performance implications: SQL Server adds a hidden uniquifier to every duplicate key in a non-unique clustered index.

    4. Make clustered indexes monotonically increasing whenever possible

    Good point.
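
    A minimal sketch of that combination (the table and index names are hypothetical):

        -- An IDENTITY clustered PK is both UNIQUE and ever-increasing, so new
        -- rows land at the end of the index (avoiding page splits) and no
        -- hidden uniquifier is needed
        CREATE TABLE dbo.OrderAudit
        (
            AuditID   INT IDENTITY(1,1) NOT NULL,
            AuditDate DATETIME NOT NULL,
            CONSTRAINT PK_OrderAudit PRIMARY KEY CLUSTERED (AuditID)
        );

        -- If the natural clustering key isn't unique on its own, append a
        -- column that makes it unique instead of leaving it non-unique, e.g.:
        -- CREATE UNIQUE CLUSTERED INDEX CIX_OrderAudit
        --     ON dbo.OrderAudit (AuditDate, AuditID);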

    You might want to specify that you shouldn't blindly create indexes because they can really bog down inserts, updates, and deletes; every index has to be maintained on every write.

    Also a good point. We have had problems with this on a few tables before.
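
    One possible way to spot the problem after the fact, as a sketch against the index usage DMV (counters reset at instance restart, so judge accordingly):

        -- Indexes that are maintained on every write but almost never read
        -- are candidates for review
        SELECT  OBJECT_NAME(s.object_id)                      AS TableName,
                i.name                                        AS IndexName,
                s.user_seeks + s.user_scans + s.user_lookups  AS Reads,
                s.user_updates                                AS Writes
        FROM    sys.dm_db_index_usage_stats s
        JOIN    sys.indexes i
                ON  i.object_id = s.object_id
                AND i.index_id  = s.index_id
        WHERE   s.database_id = DB_ID()
          AND   i.index_id > 1        -- nonclustered indexes only
        ORDER BY s.user_updates DESC;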

    The following should probably be changed to "DO NOT USE QUERY OR TABLE HINTS WITHOUT PRIOR APPROVAL FROM THE DBA FOR EACH USE!".

    7. Use query hints sparingly

    I like yours better 🙂
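
    For the body text, a couple of hypothetical examples of what would need that sign-off (dbo.Orders is made up):

        SELECT  OrderID
        FROM    dbo.Orders WITH (NOLOCK)        -- table hint
        WHERE   OrderDate >= '20120101';

        SELECT  OrderID
        FROM    dbo.Orders
        WHERE   OrderDate >= '20120101'
        OPTION  (MAXDOP 1, RECOMPILE);          -- query hints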

    The following is a pretty bad misconception. It truly depends on what is being done for performance purposes. Both temp tables and table variables start out in memory, and both overflow to disk if they get too big. Table variables are also evaluated as having just one row unless you do a recompile after they're loaded, and they do not use statistics at all. Further, sectional troubleshooting where non-persistent data tables exist is a real bitch in SSMS.

    Consider rewording this to say something like "Use temp tables in favor of table variables except when table variables provide a substantial increase in performance, or when the code doesn't work with temp tables because it's being called from external sources."

    3. Use table variables for small data sets and temp tables for larger ones

    I've read yours and others' posts on table variables vs. temp tables, and that's exactly what I was going after. What about

    "Prefer temp tables over table variables unless there is a clear performance advantage."

    and we'll flesh out the details in the body of the text that will go with each item?
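
    A quick sketch of the statistics difference driving that guidance (names and row counts are made up):

        -- Table variables carry no statistics, so the optimizer assumes one
        -- row unless the statement is recompiled after loading
        DECLARE @Work TABLE (ID INT NOT NULL PRIMARY KEY);

        INSERT INTO @Work (ID)
        SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM   sys.all_columns ac1 CROSS JOIN sys.all_columns ac2;

        SELECT COUNT(*) FROM @Work;                    -- estimated at 1 row
        SELECT COUNT(*) FROM @Work OPTION (RECOMPILE); -- estimate fixed by recompile

        -- The temp table equivalent gets real statistics automatically
        CREATE TABLE #Work (ID INT NOT NULL PRIMARY KEY);

        INSERT INTO #Work (ID)
        SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM   sys.all_columns ac1 CROSS JOIN sys.all_columns ac2;

        SELECT COUNT(*) FROM #Work;                    -- estimated at ~100000 rows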

    The following is a bit confusing. I believe what you mean to say is not to intermingle DDL with the DML on temp tables within a stored procedure.

    1. Do not use both DML and DDL on temp tables within a stored procedure

    Yup, that's what I was after.
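
    A minimal sketch of the distinction (the #Staging tables are hypothetical):

        -- Anti-pattern: DDL mixed into the DML, forcing a recompile once the
        -- temp table is already in use
        CREATE TABLE #Staging (ID INT NOT NULL);
        INSERT INTO #Staging (ID) VALUES (1);
        ALTER TABLE #Staging ADD Amount MONEY NULL;  -- DDL mid-stream
        UPDATE #Staging SET Amount = 0;

        -- Preferred: all the DDL up front, DML only from then on
        CREATE TABLE #Staging2 (ID INT NOT NULL, Amount MONEY NULL);
        INSERT INTO #Staging2 (ID, Amount) VALUES (1, 0);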