Performance issue with tally solution

  • Hello everybody!

    I have a performance issue with a set based solution. Could you please advise me on what I'm doing wrong?

    I'm trying to split text into lines by CRLF, and it seems that my tally solution is at least twice as slow as a cursor-based solution once the text exceeds a certain length.

    Here is my test environment:

    SET NOCOUNT ON

    -- //////////////////////////////////////////////////////////

    -- -> Temporary procedure to split into lines

    IF (OBJECT_ID('tempdb..#usp_print_lines') IS NOT NULL)

    DROP PROCEDURE #usp_print_lines

    GO

    CREATE PROCEDURE #usp_print_lines

    @text NVARCHAR(MAX)

    AS

    DECLARE @ret TABLE (id INT IDENTITY, line NVARCHAR(MAX))

    DECLARE @pos INT, @next INT, @crlf CHAR(2)

    SELECT @pos = 1, @crlf = CHAR(13) + CHAR(10)

    WHILE (1 = 1)

    BEGIN

    SELECT @next = CHARINDEX(@crlf, @text, @pos)

    IF (@next = 0) BREAK -- Nothing more to do

    IF (@pos != @next)

    INSERT INTO @ret SELECT SUBSTRING(@text, @pos, @next - @pos)

    SELECT @pos = @next + 1

    END

    SELECT line FROM @ret ORDER BY id -- Return lines

    GO

    -- <- Temporary procedure to split into lines

    -- //////////////////////////////////////////////////////////

    DECLARE @now DATETIME, @duration INT, @crlf CHAR(2), @count INT

    DECLARE @tally TABLE (N INT NOT NULL, PRIMARY KEY CLUSTERED (N))

    DECLARE @source TABLE (name NVARCHAR(128), definition NVARCHAR(MAX))

    DECLARE @result TABLE (line nvarchar(max))

    SELECT @crlf = CHAR(13) + CHAR(10)

    -- //////////////////////////////////////////////////////////

    -- -> Test data and tally table

    -- Get some system procedures to split into lines

    INSERT INTO @source

    SELECT TOP(200) o.name, @crlf + m.definition + @crlf

    FROM master.sys.all_objects o

    JOIN master.sys.all_sql_modules m ON o.object_id = m.object_id

    WHERE type = 'P'

    INSERT INTO @tally

    SELECT TOP(100000) ROW_NUMBER() OVER (ORDER BY c1.column_id)

    FROM master.sys.all_columns c1 CROSS JOIN master.sys.all_columns c2

    -- <- Test data and tally table

    -- //////////////////////////////////////////////////////////

    -- //////////////////////////////////////////////////////////

    -- -> Tally solution

    PRINT 'Start tally solution'

    SELECT @now = GETDATE()

    -- Split text into lines

    INSERT INTO @result

    SELECT l.line

    FROM @source s

    CROSS APPLY (SELECT TOP(1000) SUBSTRING(s.definition, N + 2, CHARINDEX(@crlf, s.definition, N + 1) - N - 2) line

    FROM @tally

    WHERE N < LEN(s.definition) - 1

    AND SUBSTRING(s.definition, N, 2) = @crlf

    ORDER BY N

    ) l

    -- Results

    SELECT @duration = DATEDIFF(MILLISECOND, @now, GETDATE())

    SELECT @count = COUNT(*) FROM @result

    PRINT 'Milliseconds: ' + CONVERT(VARCHAR(10), @duration) + ' | Lines: ' + CONVERT(VARCHAR(10), @count)

    -- <- Tally solution

    -- //////////////////////////////////////////////////////////

    DELETE FROM @result -- Clean up

    -- //////////////////////////////////////////////////////////

    -- -> Cursor solution

    PRINT 'Start cursor solution'

    SELECT @now = GETDATE();

    -- Create another table and copy all data...

    DECLARE @procs table (name nvarchar(128), definition NVARCHAR(MAX))

    INSERT INTO @procs

    SELECT name, definition

    FROM @source

    DECLARE @name NVARCHAR(128)

    DECLARE @definition NVARCHAR(MAX)

    -- Loop through all data and split into lines

    WHILE EXISTS (SELECT TOP(1) 1 FROM @procs)

    BEGIN

    SELECT TOP(1) @name = name, @definition = definition FROM @procs

    -- Split current text into lines

    INSERT INTO @result

    EXECUTE #usp_print_lines @definition

    DELETE FROM @procs WHERE name = @name

    END

    -- Results

    SELECT @duration = DATEDIFF(MILLISECOND, @now, GETDATE())

    SELECT @count = COUNT(*) FROM @result

    PRINT 'Milliseconds: ' + CONVERT(VARCHAR(10), @duration) + ' | Lines: ' + CONVERT(VARCHAR(10), @count)

    -- <- Cursor solution

    -- //////////////////////////////////////////////////////////

    My results on two different environments:

    1st system

    System:

    Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (X64)

    Mar 29 2009 10:11:52

    Copyright (c) 1988-2008 Microsoft Corporation

    Developer Edition (64-bit) on Windows NT 6.0 (Build 6001: Service Pack 1)

    Result:

    Start tally solution

    Milliseconds: 5140 | Lines: 27132

    Start cursor solution

    Milliseconds: 1946 | Lines: 28145

    2nd system

    System:

    Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)

    Feb 9 2007 22:47:07

    Copyright (c) 1988-2005 Microsoft Corporation

    Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

    Result:

    Start tally solution

    Milliseconds: 3483 | Lines: 25728

    Start cursor solution

    Milliseconds: 1626 | Lines: 25728

    I'm quite sure that I'm doing something wrong, but I don't see what :ermm:

    Thank you for all suggestions!

    Flo

    PS: I'll try to answer as quickly as possible, but I'm currently having huge problems with my internet connection...

  • All I did was confirm that your code performs essentially the same on my SQL 2005 / 1 GB RAM / 2.8 GHz machine.

    My first guess was that it was because you were using a @table variable for your Tally table; I've always heard that table variables are only good when they hold few rows.

    I ran your code, then substituted my permanent tally table... identical results. The second pass saved a few milliseconds, due to caching I assume.
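    For reference, this is roughly the kind of permanent tally table I substituted — a minimal sketch only; the dbo.Tally name and the one-million-row size are just my local conventions:

    -- Sketch only: build a one-million-row permanent tally table
    SELECT TOP (1000000)
           N = IDENTITY(INT, 1, 1)
    INTO   dbo.Tally
    FROM   master.sys.all_columns c1
    CROSS JOIN master.sys.all_columns c2
    GO
    -- Clustered primary key so range predicates and ORDER BY N stay cheap
    ALTER TABLE dbo.Tally
        ADD CONSTRAINT PK_Tally_N PRIMARY KEY CLUSTERED (N)
    GO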

    /*

    Start tally solution

    Milliseconds: 3733 | Lines: 25987

    Start cursor solution

    Milliseconds: 2140 | Lines: 26922

    second pass:

    Start tally solution

    Milliseconds: 3703 | Lines: 25987

    Start cursor solution

    Milliseconds: 1983 | Lines: 26922

    */

    I've just got to assume that although a Tally table is awesome, there will be situations where it does not perform as well as other methods.

    Lowell


    --Help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • Flo: I'm not at my laptop right now, so I can't test your code. I'll do so at the earliest opportunity. A couple of questions, though:

    Why are you selecting the top (1000)?

    WHERE N < len(s.definition)-1 should handle that.

    ORDER BY N? Does it make any difference to the output?

    Why table variable @tally?

    __________________________________________________

    Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
    Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills

  • Hey Flo!

    It is true to say that the following features of the script won't help overall performance:

    1. Using table variables (no statistics are available and parallelism cannot be used)

    2. Using large object types (MAX)

    3. Using Unicode

    4. @crlf is defined as CHAR rather than NCHAR (implicit conversion)

    Nevertheless, all that is rather beside the point here.

    The tally or numbers table solution is, to an extent, a brute-force approach. In the case of many long strings where the frequency of the searched-for string is low, a solution based on a loop/CHARINDEX approach should out-perform the tally approach.

    Searching a 1000-row table containing strings of 500K characters for example:

    The tally method will execute a SUBSTRING 1000 * 500K times, trying to find the search string.

    A method based on CHARINDEX will execute CHARINDEX only Σ(Ni + 1) times in total, where Ni is the number of occurrences of the search string in row i.

    Of course CHARINDEX is a character-by-character search from the starting position, but even without doing the math rigorously, it is apparent that there will always be a point where the tally method will be slower.

    In your particular example, it helps if the source table contains an extra column containing the length of the string, but the CHARINDEX method is still faster.

    This code shows how including an extra length column helps the optimizer:

    DECLARE @crlf NCHAR(2)

    SELECT @crlf = CHAR(13) + CHAR(10)

    SELECT SUBSTRING(s.definition, N + 2, CHARINDEX(@crlf, s.definition, N + 1) - N - 2)
    FROM #source AS S
    JOIN #tally AS T
        ON (T.N BETWEEN 1 AND S.Length - 2)
    WHERE SUBSTRING(s.definition, N, 2) = @crlf
    ORDER BY N
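    For completeness, a minimal sketch of adding that Length column, assuming the test data has been copied into the #source temp table referenced above:

    -- Sketch only: persist the string length once so it isn't recomputed per tally row
    ALTER TABLE #source ADD Length INT   -- INT is plenty for these procedure definitions
    GO
    UPDATE #source SET Length = LEN(definition)
    GO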

  • Florian Reischl (4/12/2009)


    Hello everybody!

    I have a performance issue with a set based solution. Could you please advise me on what I'm doing wrong?

    ORDER BY on unindexed Tally table, for starters. All the other comments about large table variables may apply, as well, although I haven't tested the code, yet.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Scratch the above... I found the index. Still looking.

    --Jeff Moden



  • Jeff Moden (4/12/2009)


    ORDER BY on unindexed Tally table, for starters. All the other comments about large table variables may apply, as well, although I haven't tested the code, yet.

    Unindexed? :unsure:

    Flo's code:


    DECLARE @tally TABLE (N INT NOT NULL, PRIMARY KEY CLUSTERED (N))

    edit: overlapped with Jeff's latest 🙂

  • Jeff Moden (4/12/2009)


    Florian Reischl (4/12/2009)


    Hello everybody!

    I have a performance issue with a set based solution. Could you please advise me on what I'm doing wrong?

    ORDER BY on unindexed Tally table, for starters. All the other comments about large table variables may apply, as well, although I haven't tested the code, yet.

    Wow! Nicely done, Flo... I haven't figured it out, yet. Looks like you may have figured out a way to beat the Tally table.

    --Jeff Moden



  • Flo... there's a small error in your temporary stored procedure: it was leaving a leading space in the results. I've repaired it in the following, but it still beats the tar out of the Tally table solution. Guess the Tally table pretty much sucks on the big stuff.

    --===== Procedure to split line

    IF (OBJECT_ID('tempdb..#usp_print_lines') IS NOT NULL)

    DROP PROCEDURE #usp_print_lines

    GO

    CREATE PROCEDURE #usp_print_lines

    @text NVARCHAR(MAX)

    AS

    DECLARE @ret TABLE (id INT IDENTITY, line NVARCHAR(MAX))

    DECLARE @pos INT, @next INT, @crlf NCHAR(2)

    SELECT @pos = 1, @crlf = CHAR(13) + CHAR(10)

    WHILE (1 = 1)

    BEGIN

    SELECT @next = CHARINDEX(@crlf, @text, @pos)

    IF (@next = 0) BREAK --- Nothing more to do

    IF (@pos <> @next)

    INSERT INTO @ret

    SELECT SUBSTRING(@text, @pos + 1, @next - @pos -1)

    SELECT @pos = @next + 1

    END

    SELECT line FROM @ret ORDER BY id --- Return lines

    GO

    --Jeff Moden



  • Hi

    Thank you for all this feedback!

    ...I hope my internet connection stays alive long enough for me to answer all the questions...

    Greets

    Flo

  • Bob Hovious (4/12/2009)


    Flo: I'm not at my laptop right now, so I can't test your code. I'll do so at the earliest opportunity. Couple of questions though:

    Why are you selecting the top (1000)?

    WHERE N < len(s.definition)-1 should handle that.

    ORDER BY N? Does it make any difference to the output?

    Why table variable @tally?

    Hi Bob

    Thanks for your help!

    Why are you selecting the top (1000)?

    The TOP(1000) is just there so I can use the ORDER BY, to ensure that the returned lines are not reordered.

    ORDER BY N? Does it make any difference to the output?

    I'm not sure a sequential result is guaranteed without an ORDER BY. It is important that no lines get reordered.

    Why table variable @tally?

    The @tally variable was only there to make the sample self-contained. I also tried it with a regular Tally table; same result.

    Greets

    Flo

  • Hey again Flo,

    You might like to also try this with a CLR TVF.

    I tried it on your sample and got:

    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table '#source'. Scan count 1, logical reads 77, physical reads 0, read-ahead reads 0, lob logical reads 3803, lob physical reads 0, lob read-ahead reads 2381.

    SQL Server Execution Times:

    CPU time = 1422 ms, elapsed time = 1940 ms.

    A real C# dev will be able to improve my attempt at the code:

    (the forum's code tags broke a couple of things — the generic type parameters around the Add call and a cast — so those are reconstructed below, but hopefully you will get the gist...)

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    public partial class UserDefinedFunctions
    {
        [Microsoft.SqlServer.Server.SqlFunction(
            FillRowMethodName = "FillRow",
            DataAccess = DataAccessKind.None,
            SystemDataAccess = SystemDataAccessKind.None,
            Name = "tfn_Split",
            TableDefinition = "row_id int, result nvarchar(max)")]
        public static IEnumerable tfn_Split
        (
            [SqlFacet(IsNullable = false, MaxSize = -1, IsFixedLength = false)]
            SqlString StringToSplit,
            [SqlFacet(IsNullable = false, MaxSize = 4000, IsFixedLength = false)]
            SqlString Delimiter
        )
        {
            string toSplit = StringToSplit.Value;
            string delimiter = Delimiter.Value;

            // Split the whole string in one pass, dropping empty entries
            string[] items = toSplit.Split(new string[] { delimiter }, StringSplitOptions.RemoveEmptyEntries);

            // Keyed by 1-based row number (the generics here were eaten by the code tags)
            Dictionary<int, string> results = new Dictionary<int, string>(items.Length);

            for (int i = 0; i < items.Length; i++)
            {
                results.Add(i + 1, items[i]);
            }

            return results;
        }

        public static void FillRow(Object obj, out SqlInt32 row_id, out SqlString result)
        {
            // Each enumerated item is a boxed KeyValuePair (the cast was also eaten by the code tags)
            KeyValuePair<int, string> kvp = (KeyValuePair<int, string>)obj;
            row_id = new SqlInt32(kvp.Key);
            result = new SqlString(kvp.Value);
        }
    };

    The CLR version runs fastest on my laptop, has the simplest plan (obviously) and starts to return results immediately, which is nice.

    For completeness, this is the SQL:

    SELECT S.row_id, TVF.row_id, TVF.result
    FROM #source AS S
    CROSS APPLY dbo.tfn_Split(S.definition, CHAR(13) + CHAR(10)) AS TVF
    ORDER BY S.row_id, TVF.row_id;

    Cheers,

    Paul

  • Hey Paul!

    1. Using table variables (no statistics are available and parallelism cannot be used)

    That was only for the sample, to make everything self-contained. I have a real tally table in my database. The performance is better with the real table, but still worse than the loop.

    2. Using large object types (MAX)

    3. Using Unicode

    The source data are NVARCHAR(MAX), and that's a requirement...

    4. @crlf is defined as CHAR rather than NCHAR (implicit conversion)

    Absolutely correct, thank you! I've just corrected it.

    Nevertheless, all that is rather beside the point here.

    The tally or numbers table solution is, to an extent, a brute-force approach. In the case of many long strings where the frequency of the searched-for string is low, a solution based on a loop/CHARINDEX approach should out-perform the tally approach.

    Searching a 1000-row table containing strings of 500K characters for example:

    The tally method will execute a SUBSTRING 1000 * 500K times, trying to find the search string.

    A method based on CHARINDEX will execute Sigma (Ni+1) times, where Ni is the number of occurrences of the search string in row i.

    That matches my thinking. The tally split function seems really handy and fast for sets of strings which are not too large. For really large strings, a simple cursor-style CHARINDEX loop seems to be the better solution. I think a split function is still one of the best candidates for a CLR function 😉

    Greets

    Flo

  • Hi Jeff!

    Wow! Nicely done, Flo... I haven't figured it out, yet. Looks like you may have figured out a way to beat the Tally table.

    I'm not sure if "nice" should be the correct word for "The known slow approach seems to be the faster solution" 😀

    Thanks also for the bug fix in the procedure!

    Greets

    Flo

  • I know Jeff will shoot me for this... 🙂

    I did some other tests and finally found a highly performant solution with CLR. Because CLR functions do not support table-valued functions, I had to work with the XML data type instead.

    So far the function is not really optimized, but it already beats all the other solutions!

    CLR function

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;
    using System.Xml;
    using Microsoft.SqlServer.Server;

    public partial class UserDefinedFunctions
    {
        [Microsoft.SqlServer.Server.SqlFunction]
        public static System.Data.SqlTypes.SqlXml ufn_clr_SplitLines(SqlChars textIn)
        {
            string text = new string(textIn.Value);

            // Split on CRLF (the line terminator used throughout this thread)
            string[] lines = text.Split(new string[] { "\r\n" }, StringSplitOptions.None);

            int capacity = text.Length * 2;
            MemoryStream strm = new MemoryStream(capacity);
            XmlWriter sw = XmlWriter.Create(strm);

            sw.WriteStartDocument();
            sw.WriteStartElement("Root");

            // One <Item> element per line
            for (int i = 0; i < lines.Length; i++)
            {
                string line = lines[i];
                sw.WriteElementString("Item", line);
            }

            sw.WriteEndElement();
            sw.WriteEndDocument();
            sw.Flush();

            strm.Position = 0;
            SqlXml sqlXml = new SqlXml(strm);
            return sqlXml;
        }
    };
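    For anyone who wants to run this, the assembly and function still need to be registered. A minimal sketch, assuming CLR is enabled on the instance (sp_configure 'clr enabled') and the compiled assembly sits at C:\clr\SplitLines.dll — both the path and the assembly name are just placeholders:

    -- Sketch only: hypothetical assembly name and path
    CREATE ASSEMBLY SplitLines
    FROM 'C:\clr\SplitLines.dll'
    WITH PERMISSION_SET = SAFE
    GO
    -- Bind the T-SQL function to the CLR method
    CREATE FUNCTION dbo.ufn_clr_SplitLines (@text NVARCHAR(MAX))
    RETURNS XML
    AS EXTERNAL NAME SplitLines.UserDefinedFunctions.ufn_clr_SplitLines
    GO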

    Additional test block in TSQL

    DELETE FROM @result -- Clean up

    -- //////////////////////////////////////////////////////////

    -- -> Clr xml solution

    PRINT 'Start clr xml solution'

    SELECT @now = GETDATE()

    -- Split text into lines

    INSERT INTO @result

    SELECT l.line

    FROM @source s

    CROSS APPLY (SELECT

    ISNULL(T.C.value('(./text())[1]', 'nvarchar(1000)'), '') line

    FROM (SELECT dbo.ufn_clr_SplitLines(s.definition) col1) x

    CROSS APPLY x.col1.nodes('/Root/Item') T(C)

    ) l

    -- Results

    SELECT @duration = DATEDIFF(MILLISECOND, @now, GETDATE())

    SELECT @count = COUNT(*) FROM @result

    PRINT 'Milliseconds: ' + CONVERT(VARCHAR(10), @duration) + ' | Lines: ' + CONVERT(VARCHAR(10), @count)

    -- <- Clr xml solution

    -- //////////////////////////////////////////////////////////

    Results

    Start clr xml solution

    Milliseconds: 783 | Lines: 28545

    @jeff

    The day to activate CLR on your servers is close 😀

    Greets

    Flo
