Gotcha! SQL Aggregate Functions and NULL

  • Mike C

    SSC-Insane

    Points: 23224

    Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/mcoles/gotchasqlaggregatefunctionsandnull.asp

  • jwainz

    Right there with Babe

    Points: 739

I learned to use COUNT(1) instead of COUNT(*) after being told it was faster. It should return the same result as COUNT(*), unaffected by NULLs.

  • Mike C

    SSC-Insane

    Points: 23224

    Thanks for the feedback!

    I'm not sure that COUNT(1) is any faster than COUNT(*).  If you look at the query plan for the sample query in the article:

    SELECT COUNT(*) AS NULLRows

    FROM [Northwind].[dbo].[suppliers]

    It turns out it's exactly the same as the query plan for the modified query:

    SELECT COUNT(1) AS NULLRows

    FROM [Northwind].[dbo].[suppliers]

    If you take a look at the query plans, pay special attention to the "Stream Aggregate/Aggregate Step".  With both query plans, the argument for this step is "[exprnnnn]=COUNT(*)".  It looks as if SQL Server just converts "COUNT(1)" into "COUNT(*)" for you.

There might be a speed difference in choosing COUNT(*) or COUNT(column) depending on your table indexes.  COUNT(*) allows SQL Server to automatically choose the best index for the job.  If you think SQL Server is not choosing the best index for the job, you can always specify COUNT(column) to force SQL Server to reconsider its index usage.

    Even if there were a slight savings in speed with using COUNT(1), I would recommend sticking with the ANSI-defined syntax, COUNT(*).  There's no guarantee that non-ANSI syntax will work on other platforms, or even on different versions of the same platform.
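    The counting behavior discussed above is easy to verify outside SQL Server. Here's a minimal sketch using Python's sqlite3 module with a made-up suppliers table (the table and data here are illustrative, not the actual Northwind schema):

    ```python
    import sqlite3

    # In-memory database with a hypothetical suppliers table;
    # two of the four rows have a NULL fax column.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE suppliers (name TEXT, fax TEXT)")
    conn.executemany(
        "INSERT INTO suppliers VALUES (?, ?)",
        [("Acme", "555-0100"), ("Globex", None),
         ("Initech", None), ("Umbrella", "555-0199")],
    )

    # COUNT(*) and COUNT(1) both count every row, NULLs included;
    # COUNT(column) counts only rows where that column is non-NULL.
    star, one, col = conn.execute(
        "SELECT COUNT(*), COUNT(1), COUNT(fax) FROM suppliers"
    ).fetchone()
    print(star, one, col)  # 4 4 2
    ```

    COUNT(*) and COUNT(1) agree because the constant 1 is never NULL, while COUNT(fax) drops the two NULL rows.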

  • Merrill Aldrich

    SSCrazy

    Points: 2142

    Great article - thank you for composing such a clear and simple explanation.

  • Mike C

    SSC-Insane

    Points: 23224

    Thanks for the feedback!  I'm glad you found it helpful.

  • Kumaravel-229312

    SSC-Addicted

    Points: 440

    Simple and easily understandable

    Thanks. Expecting more articles like this


  • Mike C

    SSC-Insane

    Points: 23224

    Thanks, your posts help me refine the style I will use in future article submissions.  I appreciate your feedback!

  • sushila

    SSC-Dedicated

    Points: 35293

    Mike - very comprehensive and love the presentation - simple and direct!

    **ASCII stupid question, get a stupid ANSI !!!**

  • Mike C

    SSC-Insane

    Points: 23224

    Thanks, I'm glad you enjoyed it!

  • Mike C

    SSC-Insane

    Points: 23224

    Someone e-mailed me a question directly (name and address withheld on request).  The question was a good one, so I thought I'd address it with a post here:

    "Why would you assume NULL values are zeroes [by using COALESCE(column, 0)] in your AVG() function calls?  It seems like that would throw your answer even further off!"

    That's an excellent question!  On the face of it, it doesn't appear to make much sense for normal averaging.  But bear with me as I walk through this scenario:

    You have a room with 20 football players in it, and you have to calculate the average weight for the room.  If 10 players refuse to give their weight (i.e., NULL), how do you estimate the average?

    Using AVG(weight) and eliminating NULLs gives us an average at a point in time when we only have half the data.  This is standard practice, and it is 'precise', but it is not 'accurate'.  The fact that it is not accurate should be noted on reports generated using this data.  After all, this average accounts for only 50% of the players in the room; and our final result, once we get the remaining players' weights, could be heavily weighted (no pun intended) in one direction or the other.  For example, if our average is 150 pounds with 10 players responding, it could very well jump to 250 or higher by the time we get everyone's weights included in the final calculation.
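    You can see AVG() silently dropping NULLs with a small sketch in Python's sqlite3 module (the players table and weights here are hypothetical, just to mirror the scenario):

    ```python
    import sqlite3

    # Hypothetical roster: four players, two of whom refused to
    # give their weight (stored as NULL).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE players (weight REAL)")
    conn.executemany("INSERT INTO players VALUES (?)",
                     [(140,), (160,), (None,), (None,)])

    # AVG() ignores NULLs: the result is the average of the two
    # known weights, not an average over all four players.
    avg_known, responders, total = conn.execute(
        "SELECT AVG(weight), COUNT(weight), COUNT(*) FROM players"
    ).fetchone()
    print(avg_known, responders, total)  # 150.0 2 4
    ```

    The 150.0 is computed from only 2 of the 4 players, which is exactly the precise-but-not-accurate situation described above.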

    Can we be more accurate than this?  The answer is yes, but we will become less 'precise' in the process.

    Let's say that we have a tidbit of information about the football players in the room.  We are told in advance that nobody is over 300 pounds, and we determine (obviously) that nobody is less than 0 pounds.  Since this is a football team, management wants all the players to be bulky; therefore, the greater the weight, the better.  These are our best-case/worst-case scenarios.  Using these two numbers, with COALESCE(), we can come up with best-case/worst-case scenario averages:

    SELECT AVG(COALESCE(weight, 300)) AS BestCase FROM players;

    SELECT AVG(COALESCE(weight, 0)) AS WorstCase FROM players;

    This will give us a range that we know our final average will fall between.  We can make our answer more precise if we can narrow our limits (i.e., no one weighs less than 100 pounds).

    We actually use this type of range calculation all the time without even thinking about it: when someone asks how much something costs ("$5 to $10"), how much profit the company will make this year ("between $1.6 mil. and $1.8 mil."), or what time we will arrive ("between 5:00 and 5:30").

    So to answer the question, AVG(COALESCE()) is useful when trying to determine best-case average, worst-case average, or when determining a range for our averages.
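    Continuing the same sketch in Python's sqlite3 module (hypothetical players table, with the 0-pound and 300-pound bounds from the scenario), the two COALESCE() queries bracket the true average:

    ```python
    import sqlite3

    # Known weights 140 and 160, two missing (NULL), with assumed
    # bounds of 0 and 300 pounds for the missing values.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE players (weight REAL)")
    conn.executemany("INSERT INTO players VALUES (?)",
                     [(140,), (160,), (None,), (None,)])

    # Substitute the upper bound for a best case and the lower
    # bound for a worst case; the final average must fall between.
    best, worst = conn.execute(
        "SELECT AVG(COALESCE(weight, 300)), "
        "AVG(COALESCE(weight, 0)) FROM players"
    ).fetchone()
    print(worst, best)  # 75.0 225.0
    ```

    Narrowing the limits (say, nobody under 100 pounds) narrows this range and makes the estimate more precise.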

  • Tim Rick

    SSC Rookie

    Points: 38

    Great article. Would it be possible for you to further discuss the class of analytic functions from the SQL-99 OLAP extension? They will be introduced in SS2K5. SS2K5 apparently still won't support NULLS FIRST/LAST in the ORDER BY clause like Oracle/DB2, and it can be quite agonizing to deal with NULLs for RANK, ROW_NUMBER, etc.

  • Mike C

    SSC-Insane

    Points: 23224

    Thanks for the comments Rick!  OLAP is going to be a very hot topic and I imagine we'll see a ton of articles on it in the near future.  I imagine someone will beat me to the punch on this one, but if not I wouldn't be averse to pulling something together down the road.

  • Jesper-244176

    SSCertifiable

    Points: 7032

    Mike, you consider count(*) a special case and write that "count(column) eliminates null values". jwainz has a point in mentioning that count(*) and count(1) are equivalent, since count(1) is not a special case. The (calculated, or is it derived?) column which is always 1 (the one you get by writing select 1 from table) is never null and thus count(1) equals the number of rows in your table by your rule "count(column) eliminates null values".

    I agree with you in considering count(*) a special case, but sometimes it is good to see things from a different perspective

    I am a bit surprised to hear that count(1) is not part of the ANSI syntax. Not that I know much about it, but I would think that you could calculate count, min and max on just about anything (and sum and avg on numeric columns) and that this would be part of the standard.

    Thanks for the article, I look forward to the next one from you...

     

    Added: I didn't realise that this article was 8 months old when I saw it on the front page of sqlservercentral.com and proceeded to the discussion. My apologies to everyone...

  • Mike C

    SSC-Insane

    Points: 23224

    No problem. It's been a while, but I think the discussion on COUNT(*) vs. COUNT(1) was geared more towards performance. If I recall correctly, jwainz was saying something to the effect that COUNT(1) performed better than COUNT(*). I'm not sure that's the case, but I would definitely advise sticking with the COUNT(*) syntax since its behavior is specifically defined by the ANSI standard. Specific behavior for COUNT(1) wasn't defined in the SQL-92 standard, which was why I recommended avoiding it.

  • Steve Rosenbach

    SSCrazy

    Points: 2041

    Thanks Mike!

    If every article were as well-written and had as good examples as yours, I'd be an expert on everything.

    Seriously, you took an important topic and shed more light on it than I've seen done before.

    Best regards,

    SteveR

Viewing 15 posts - 1 through 15 (of 32 total)
