• It is irritating how big the gap is between the performance here and high-performance code, though. Why should we have to be impressed that multiplying a million numbers takes 800 ms? A simplistic C# implementation of the same process, applying a function call to each element, clocks in at 0.8 ms on my old laptop. That is 1000x faster. Obviously, there are reasons for some slowdown, but this magnitude is hard to accept.
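
The simplistic C# comparison above can be sketched roughly like this (the method name, array size, and checksum trick are my illustration, not the exact code that was benchmarked):

```csharp
using System;
using System.Diagnostics;

class Program
{
    // The same doubling operation, expressed as an ordinary method call.
    static int Mult2(int n) => n * 2;

    static void Main()
    {
        var data = new int[1_000_000];
        for (int i = 0; i < data.Length; i++) data[i] = i;

        var sw = Stopwatch.StartNew();
        long sum = 0; // accumulate a result so the loop cannot be optimized away
        for (int i = 0; i < data.Length; i++)
            sum += Mult2(data[i]);
        sw.Stop();

        Console.WriteLine($"{sw.ElapsedMilliseconds} ms, checksum {sum}");
    }
}
```

Even without JIT inlining of Mult2, a loop like this over a million ints is sub-millisecond territory on current hardware, which is where the 1000x figure comes from.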

    Also skipped here is simply using a CLR-implemented function. Something like:

    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    [Microsoft.SqlServer.Server.SqlFunction(
        DataAccess = DataAccessKind.None, IsDeterministic = true,
        IsPrecise = false, SystemDataAccess = SystemDataAccessKind.None
    )]
    public static SqlInt32 Mult2(SqlInt32 n)
    {
        return n * 2;
    }

    This gives much better performance.
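
    For reference, wiring such a method up as a SQL Server function goes roughly like this (the assembly path, assembly name, and class name are placeholders for whatever your project produces):

    ```sql
    -- Register the compiled assembly (path and names are placeholders).
    CREATE ASSEMBLY MyClrFunctions
        FROM 'C:\clr\MyClrFunctions.dll'
        WITH PERMISSION_SET = SAFE;
    GO

    -- Bind a T-SQL function name to the static method.
    CREATE FUNCTION dbo.Mult2 (@n int) RETURNS int
        AS EXTERNAL NAME MyClrFunctions.[UserDefinedFunctions].Mult2;
    GO
    ```

    After that, dbo.Mult2 is called exactly like any other scalar function.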

    My measurements on a modern machine, running SQL Server 2012:

    Baseline (questionable due to optimization, but...): 414 ms (150 ms without SET STATISTICS)
    TSQL function: 2403 ms (1653 ms without SET STATISTICS)
    iSF function: 263 ms (203 ms without SET STATISTICS)
    CLR function: 349 ms (346 ms without SET STATISTICS)

    While the iSF beats the CLR function on raw speed, the CLR version can be called just like a normal scalar TSQL function.
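
    That usage difference can be sketched like this (table and function names are illustrative): a scalar function, whether TSQL or CLR, drops straight into an expression, while an inline table-valued function has to be invoked with APPLY or a subquery.

    ```sql
    -- Scalar function (TSQL or CLR): usable directly in an expression.
    SELECT dbo.Mult2(t.n)
    FROM dbo.Numbers AS t;

    -- Inline table-valued function: needs CROSS APPLY (or a subquery).
    SELECT f.result
    FROM dbo.Numbers AS t
    CROSS APPLY dbo.Mult2_iTVF(t.n) AS f;
    ```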

    The optimizer could be ruining all of these measurements, of course.

    And, as I said at the start, all of this is still orders of magnitude slower than simple code for the same task.