• There are a number of differences, but the three most important are:

    1. In the Compute Scalar immediately above the Clustered Index Scan, the plan with the extra GROUP BY calculates:

    [Expr1004] = Scalar Operator(datename(month,[tempdb].[dbo].[#MyHead].[SomeDateTime]))

    [Expr1005] = Scalar Operator(datepart(month,[tempdb].[dbo].[#MyHead].[SomeDateTime]))

    whereas the plan with the single GROUP BY only calculates:

    [Expr1004] = Scalar Operator(datename(month,[tempdb].[dbo].[#MyHead].[SomeDateTime]))

    2. The extra DATEPART computation may not seem like much, but it is repeated 10,000,000 times. The extra expression also increases the total amount of data flowing into the Hash Match (Aggregate) that follows. The per-row size of the DATEPART result (a four-byte integer) is small, but multiplied by ten million rows it adds up: the estimated total row size increases from 467MB to 505MB, an extra 38MB or roughly 8%.

    3. The Hash Match aggregate implements the GROUP BY expression(s). As well as receiving an extra 38MB of input, the double-GROUP BY plan performs a more complex hash match, on [Expr1004], [Expr1005] rather than on [Expr1004] alone.
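    The 38MB figure in point 2 can be sanity-checked with quick arithmetic (a back-of-envelope sketch, assuming one four-byte integer per row):

```python
# Extra data volume from the DATEPART expression:
# one 4-byte integer per row, for ten million rows.
rows = 10_000_000
bytes_per_int = 4  # SQL Server int is four bytes

extra_mb = rows * bytes_per_int / (1024 * 1024)
print(f"{extra_mb:.1f} MB")  # about 38 MB, matching the 505MB - 467MB delta
```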

    Of course it is pretty obvious to a human that DATEPART and DATENAME in this context will group in exactly the same way (they are based on the same date, and use the same MONTH granularity). Each possible value of DATEPART is associated with exactly one value of DATENAME. This one-to-one mapping means that we only need to group on DATEPART or DATENAME, not both.
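    The one-to-one mapping is easy to demonstrate outside SQL Server. The Python sketch below (illustrative only; the sample month numbers are made up) groups on (month number, month name) and on month number alone, and arrives at the same groups both ways:

```python
import calendar
from itertools import groupby

# Hypothetical sample of month numbers, as DATEPART(month, ...) would return.
months = [3, 1, 3, 12, 1, 7, 3]

# Pair each number with its name, as DATENAME(month, ...) would return.
pairs = sorted((m, calendar.month_name[m]) for m in months)

# GROUP BY number, name  versus  GROUP BY number alone.
groups_both = [key for key, _ in groupby(pairs)]
groups_one = [key for key, _ in groupby(pairs, key=lambda p: p[0])]

# The name adds nothing: each number determines exactly one name,
# so both groupings produce the same number of groups.
assert len(groups_both) == len(groups_one) == 4
```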

    The query optimizer does not include logic to reason that way, so it performs the literal double grouping. (Another consideration here is that DATENAME is non-deterministic).
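    Since the optimizer will not deduce the redundancy itself, the practical consequence is to aggregate on the single integer key and derive the display value once per group afterwards. A sketch of that shape in Python (the data and names here are illustrative, not the original query):

```python
import calendar
from collections import Counter

# Hypothetical month numbers extracted from the date column.
month_numbers = [3, 1, 3, 12, 1, 7, 3]

# Aggregate on the single integer key only...
counts = Counter(month_numbers)

# ...then derive the month name once per group, not once per row.
result = {calendar.month_name[m]: n for m, n in counts.items()}
print(result)
```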

    I hope that helps.

    Paul