There Must Be 15 Ways To Lose Your Cursors… Part 2: Just Put It in a S

  • Mike C (7/1/2010)


    One note on your update--be careful when using this syntax:

    UPDATE t
    SET ...
    FROM table t
    INNER JOIN table2 t2
    ON ...

    This syntax lends itself to parallelism and can result in unpredictable "intra-query parallelism deadlock" events. I just fixed 4 or 5 production queries for a bank with this very problem.

    I use this syntax a lot, and have never run into a problem with the parallelism, but it is good to know in any case. I just read the MSDN post you linked to and saw: "...Most intra-query parallelism deadlocks are considered bugs, although some of them can be risky bugs to fix so a fix may not be possible. If you run into one and you're already on the latest SQL service pack, your best bet may be to investigate workarounds. Luckily, this type of deadlock is relatively uncommon..." So that explains why I haven't seen the problem: it's rare and considered a bug.

    In some cases you have multiple matches and you don't care which gets applied last. I don't know, performance-wise, whether it would be better to spend the time to uniquify the data first or to suffer the penalty of the extra writes. (Most of the time they have been one-off UPDATE queries, so I didn't bother.)

  • Wow. Thanks for all the extra info, Mike.

    Much appreciated!

  • Mike C (7/1/2010)


    One note on your update--be careful when using this syntax:

    UPDATE t
    SET ...
    FROM table t
    INNER JOIN table2 t2
    ON ...

    This syntax lends itself to parallelism and can result in unpredictable "intra-query parallelism deadlock" events. I just fixed 4 or 5 production queries for a bank with this very problem.

    So does your method of using FROM not run into this problem?

    UPDATE dbo.RouteFareHistoryWeek
    SET
    -snip-
    FROM #Temp t
    WHERE dbo.RouteFareHistoryWeek.FriDate = @TodaysDate
    AND dbo.RouteFareHistoryWeek.RouteID_FK = t.[Route]

    Isn't that essentially an INNER JOIN or the same code?

  • UMG Developer (7/1/2010)


    Mike C (7/1/2010)


    One note on your update--be careful when using this syntax:

    UPDATE t
    SET ...
    FROM table t
    INNER JOIN table2 t2
    ON ...

    This syntax lends itself to parallelism and can result in unpredictable "intra-query parallelism deadlock" events. I just fixed 4 or 5 production queries for a bank with this very problem.

    So does your method of using FROM not run into this problem?

    UPDATE dbo.RouteFareHistoryWeek
    SET
    -snip-
    FROM #Temp t
    WHERE dbo.RouteFareHistoryWeek.FriDate = @TodaysDate
    AND dbo.RouteFareHistoryWeek.RouteID_FK = t.[Route]

    Isn't that essentially an INNER JOIN or the same code?

    Yes, in this example and in the update you posted, the UPDATE falls into the category of updates that would require more than a dozen nearly identical scalar subqueries to make it standard. According to MS, the optimizer should be able to take advantage of this.

    CREATE TABLE #Destination
    (
    ID INT NOT NULL,
    VAL INT
    );
    GO

    WITH CTE
    AS
    (
    SELECT 1 AS ID,
    CHECKSUM(NEWID()) AS VAL
    UNION ALL
    SELECT ID + 1,
    CHECKSUM(NEWID())
    FROM CTE
    WHERE ID < 100000
    )
    INSERT INTO #Destination
    (
    ID,
    VAL
    )
    SELECT ID,
    VAL
    FROM CTE
    OPTION (MAXRECURSION 0);
    GO

    CREATE TABLE #Source
    (
    ID INT NOT NULL,
    VAL INT NOT NULL
    );
    GO

    WITH CTE
    AS
    (
    SELECT 1 AS N,
    ABS(CHECKSUM(NEWID()) % 100000) AS ID,
    CHECKSUM(NEWID()) AS VAL
    UNION ALL
    SELECT N + 1,
    ABS(CHECKSUM(NEWID()) % 100000),
    CHECKSUM(NEWID())
    FROM CTE
    WHERE N < 1000000
    )
    INSERT INTO #Source
    (
    ID,
    VAL
    )
    SELECT ID,
    VAL
    FROM CTE
    OPTION (MAXRECURSION 0);
    GO

    UPDATE d
    SET d.VAL = s.VAL
    FROM #Destination d
    INNER JOIN #Source s
    ON d.ID = s.ID;

    UPDATE #Destination
    SET VAL = s.VAL
    FROM #Source s
    WHERE #Destination.ID = s.ID;

    UPDATE #Destination
    SET VAL =
    (
    SELECT MAX(s.VAL)
    FROM #Source s
    WHERE s.ID = #Destination.ID
    );
    GO

    DROP TABLE #Source;
    DROP TABLE #Destination;
    GO

    As for your other point, I can only think of two situations where it wouldn't matter which row updated last when there are duplicates in the source table:

    1. You're generating test data and really don't care what's generated.

    2. Your source data represents duplicates, so the result will be the same no matter which one updates last.

    In the case of #2 eliminating duplicates before the update will probably make the update more efficient overall.
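
    For illustration, a minimal sketch of that kind of de-duplication, reusing the #Source and #Destination tables from the example above (the ORDER BY that picks the surviving row is arbitrary here and would be changed if a specific row should win):

    ;WITH DedupedSource AS
    (
        -- keep one arbitrary row per ID
        SELECT ID,
               VAL,
               ROW_NUMBER() OVER (PARTITION BY ID ORDER BY VAL) AS rn
        FROM #Source
    )
    UPDATE d
    SET d.VAL = s.VAL
    FROM #Destination d
    INNER JOIN DedupedSource s
        ON d.ID = s.ID
    WHERE s.rn = 1;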

    Mike C

  • Mike C (7/1/2010)


    Jeff Moden (7/1/2010)


    Mike C (7/1/2010)


    Good catch on the week # UMG. As you mentioned, I did assume 2005 or 2008, but only because 2000 is out of support :). If using 2000, he can easily turn the CTEs into SELECT INTO #tempTable statements. I'm sure there's plenty of opportunity for optimization here as well, if we had some sample data and expected outputs.

    One note on your update--be careful when using this syntax:

    UPDATE t
    SET ...
    FROM table t
    INNER JOIN table2 t2
    ON ...

    This syntax lends itself to parallelism and can result in unpredictable "intra-query parallelism deadlock" events. I just fixed 4 or 5 production queries for a bank with this very problem.

    Thanks

    Mike C

    Now THAT's interesting. What was the fix, Mike? MAXDOP?

    That was our DBA's quick-fix, but it turned a 1-hour stored proc into a 5-hour stored proc. In some cases you can tweak the indexes on the tables to prevent table scans of the input tables, which is the underlying cause of the parallelism. In this case the joins were already on the clustered PK. My fix was to rewrite the query completely using ANSI correlated subquery syntax. That cut the run-time down to about 12 minutes and uncovered a business logic flaw when I changed it (multiple matching rows in the second table).
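
    For anyone curious, a minimal sketch of what that ANSI-style rewrite might look like (target_table, source_table, keycol and col are placeholder names, not the actual tables involved):

    UPDATE target_table
    SET col =
    (
        -- scalar subquery: errors out if more than one source row matches,
        -- which is exactly how the multiple-match flaw surfaced
        SELECT s.col
        FROM source_table s
        WHERE s.keycol = target_table.keycol
    )
    WHERE EXISTS
    (
        SELECT 1
        FROM source_table s
        WHERE s.keycol = target_table.keycol
    );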

    Mike C

    The only time that I've seen a problem is when you have a joined update but have neglected to include the target table in the FROM clause. I've seen it take a 20-second update and slam 4 CPUs into the wall for over 2 hours. The offending code will usually look something like the following...

    UPDATE table1
    SET ...
    FROM table2 t2
    WHERE table1.somecolumn = t2.somecolumn

    Notice in the above that table1 is being used in an old-style join but table1 is not in the FROM clause.
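
    For comparison, the same update with the target table included (and aliased) in the FROM clause would look something like the following; table1, table2, somecolumn and somevalue are placeholder names here:

    UPDATE t1
    SET t1.somevalue = t2.somevalue
    FROM table1 t1
    INNER JOIN table2 t2
        ON t1.somecolumn = t2.somecolumn;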

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Hi,

    I've had two situations where cursors were involved and I'm interested in what you think of them.

    The first was as a business analyst at an internet retail company that was using Oracle. We periodically had to do database updates that sometimes involved 30,000+ rows. The most efficient way to do that is with an update query. However, the DBAs would not allow this because, according to them, the SQL hogged resources and therefore degraded performance for online retail customers. So they told us to use a cursor that included a commit after every 100 updates. This way, if they decided to kill your update in the middle, half would be done and the other half could be resumed at a more database-convenient time. Of course the cursor had to be designed to resume without redoing the updates that had already been completed. What do you think of this issue?

    The second one was for a company with a SQL Server 2005 database. In order to create a hierarchy of production orders, the recursive CTE just didn't work because the hierarchical relationships were not very simple. So I had to use a recursive stored procedure. This worked well because in less than a tenth of a second I had a record set that defined the entire production order hierarchy. Since I was working from a VB.NET app, I only had to hit the database once. But circumstances were such that management told me to scrap the stored procedure. So I had to make the recursive CTE work. But this turned out to be slower because not all the lines of the top-level production order were involved in the hierarchy. So I could come up with a reliable result only if I ran the recursive CTE when the production order line had the recursion indicator. Most of the time it didn't, but there were times when I had to hit the database twenty times. It seems to me that in this case the recursive stored procedure or cursor was the better approach. What do you think?

    I'm not sure I've adequately described the issues for you; so let me know if you need more info. Perhaps you can add one or two parts to your article to explain what you think is the best way to handle these situations.

    Mike

    These articles are very well written; even an accidental DBA like myself can understand the logic in what you are describing. The actual examples also make the processes far clearer to understand.

    I have one issue: in a bulk insert process - rather than the line-by-line cursor - if I have one bad record, all the records in the data set are rejected. Because of my limited skills I have recently written a cursor which loads the data record by record, and any record that fails is written to an error table along with the error message.

    For instance, a record may have a null value which is being loaded into a field that won't accept nulls.

    In my case, data from four tables of information from four separate sources are being selectively combined and loaded into a single table in the database. There are a bunch of rules for the data in each field, and if any field fails, the record is rejected. The cursor allows me to write this record to a reject table along with the error message generated by SQL. A human then looks at the reject table to decide what action is needed.

    If you have to use CURSORS, then I have found that this code is a slightly better format to use, as it only requires you to enter the "FETCH NEXT" line once. I used to change one and forget to change the second, especially when the "-- Do Something" code gets quite long.

    DECLARE @holding Int

    DECLARE Temp_cursor CURSOR Forward_Only FOR
    Select TableColumn From dbo.TableName

    OPEN Temp_cursor

    WHILE (1=1)
    BEGIN
        FETCH NEXT FROM Temp_cursor into @holding
        If (@@FETCH_STATUS <> 0) Break

        -- Do Something
    END

    CLOSE Temp_cursor
    DEALLOCATE Temp_cursor

  • MikeAngelastro-571287 (7/1/2010)


    Hi,

    I've had two situations where cursors were involved and I'm interested in what you think of them.

    The first was as a business analyst at an internet retail company that was using Oracle. We periodically had to do database updates that sometimes involved 30,000+ rows. The most efficient way to do that is with an update query. However, the DBAs would not allow this because, according to them, the SQL hogged resources and therefore degraded performance for online retail customers. So they told us to use a cursor that included a commit after every 100 updates. This way, if they decided to kill your update in the middle, half would be done and the other half could be resumed at a more database-convenient time. Of course the cursor had to be designed to resume without redoing the updates that had already been completed. What do you think of this issue?

    Batching a complex query that hits many rows into multiple transactions in order to reduce interference with interactive work requiring rapid response is a common technique, and it's a reasonable thing to do if the complex query has to run at a time when the system is being used for OLTP. Using a row-by-row cursor within the individual transactions is usually eminently unreasonable. A better way to approach this is to limit the number of row updates handled in a single query to some sensible figure - perhaps 100, perhaps less, perhaps more, depending on how long the query takes - and have a loop which detects whether there is anything more to do. It always needs code to detect what has already been done and what hasn't (this can be pretty trivial if you add a table containing a description of where you are up to).
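
    A minimal sketch of that kind of batching loop in T-SQL follows; the table, column and flag names are hypothetical, and the batch size is whatever the system tolerates:

    DECLARE @BatchSize INT;
    SET @BatchSize = 100;

    WHILE 1 = 1
    BEGIN
        -- each pass is its own (auto-committed) transaction
        UPDATE TOP (@BatchSize) t
        SET t.SomeColumn = t.NewValue,
            t.Processed  = 1              -- records what has already been done
        FROM dbo.BigTable t
        WHERE t.Processed = 0;            -- only rows not yet handled

        IF @@ROWCOUNT = 0
            BREAK;                        -- nothing left to do
    END;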

    The second one was for a company with a SQL Server 2005 database. In order to create a hierarchy of production orders, the recursive CTE just didn't work because the hierarchical relationships were not very simple. So I had to use a recursive stored procedure. This worked well because in less than a tenth of a second I had a record set that defined the entire production order hierarchy. Since I was working from a VB.NET app, I only had to hit the database once. But circumstances were such that management told me to scrap the stored procedure. So I had to make the recursive CTE work. But this turned out to be slower because not all the lines of the top-level production order were involved in the hierarchy. So I could come up with a reliable result only if I ran the recursive CTE when the production order line had the recursion indicator. Most of the time it didn't, but there were times when I had to hit the database twenty times. It seems to me that in this case the recursive stored procedure or cursor was the better approach. What do you think?

    A recursive trigger or SP can be a good solution sometimes, but you have to watch out for recursion depth limits and other nasty server-imposed factors. If there was any sort of risk that the depth could exceed the limits allowed by the server (perhaps because of future growth of the order hierarchy), it would be wrong to use the technique. The depth allowed in SQL Server 2005 was very small: 32. Even worse, this wasn't a limit on recursion as such; it was a limit on SP or trigger nesting, so you might hit it well before reaching a recursion depth of 32, because the SP called other (perhaps non-recursive) SPs and/or because it was itself called somewhere in a hierarchy of SP calls. Of course, if the total nesting depth had no prospect of getting anywhere near 32, a recursive SP might have been the best solution.
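
    If the recursive SP route is taken anyway, a minimal sketch of a guard against that nesting limit might look like this (the procedure and parameter names are hypothetical):

    CREATE PROCEDURE dbo.ExpandOrderLevel
        @OrderID INT
    AS
    BEGIN
        -- @@NESTLEVEL counts SP/trigger nesting, not just this procedure's
        -- own recursion, so leave some headroom below the hard limit of 32
        IF @@NESTLEVEL >= 30
        BEGIN
            RAISERROR('Order hierarchy too deep for recursive expansion.', 16, 1);
            RETURN;
        END;

        -- ... process @OrderID here, then call dbo.ExpandOrderLevel again
        -- for each child order ...
    END;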

    I'm not sure I've adequately described the issues for you; so let me know if you need more info. Perhaps you can add one or two parts to your article to explain what you think is the best way to handle these situations.

    Without knowing the unspecified circumstances that "were such that management told..." it's not possible to give any answer better than the generalised waffle I've written above.

    Tom

  • Wiztech Russ (7/1/2010)


    These articles are very well written; even an accidental DBA like myself can understand the logic in what you are describing. The actual examples also make the processes far clearer to understand.

    I have one issue: in a bulk insert process - rather than the line-by-line cursor - if I have one bad record, all the records in the data set are rejected. Because of my limited skills I have recently written a cursor which loads the data record by record, and any record that fails is written to an error table along with the error message.

    For instance, a record may have a null value which is being loaded into a field that won't accept nulls.

    In my case, data from four tables of information from four separate sources are being selectively combined and loaded into a single table in the database. There are a bunch of rules for the data in each field, and if any field fails, the record is rejected. The cursor allows me to write this record to a reject table along with the error message generated by SQL. A human then looks at the reject table to decide what action is needed.

    As of SQL Server 2005, the BULK INSERT command of T-SQL allows you to name a file for such errors. Such a capability has also always existed with BCP. Both are incredibly quick at importing data.

    I never import data directly into a final table. There's just too much that can go wrong because of 3rd party data. Instead, I'll import into a staging table where I may or may not use the single-row rejection features of BULK INSERT or BCP to reject such things as rows with misplaced nulls, etc. There are times when some very complex rules must be applied, so I'll import them into a staging table and then exercise all the rules in one or more passes of set-based code, marking bad rows as the code goes. After that, it's simple... anything that's still marked as good at the end of the run goes to the final table. Anything that's marked bad goes to the "rework" table.
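
    A minimal sketch of the first half of that approach, i.e. BULK INSERT into a staging table with bad rows diverted to an error file; the file paths, table name and terminators here are illustrative only:

    BULK INSERT dbo.ImportStaging
    FROM 'C:\Imports\feed.txt'
    WITH
    (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        ERRORFILE       = 'C:\Imports\feed_rejects.txt', -- bad rows land here
        MAXERRORS       = 1000                           -- keep loading past failures
    );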

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    All of the following is IMHO, as you understand, of course.

    This article is weak. Too many words and a lack of _real_ examples.

    To be exact:

    1. The first example, with

    Select 1 From master.sys.columns c1 Cross Join master.sys.columns c2

    is a monster built purely by the author's imagination.

    2. The second example, with

    PRINT @lastname + ', ' + @firstname

    and the dramatic story about the nameless developer, is IMHO a tale invented by the author himself for illustrative purposes. Anyhow (if the story is true), the nameless developer is obviously either a moron (in the worst case) or simply lacks skill. I have to mention that a man surprised by the fact that PRINT does not return a resultset, or that a row in a resultset will not grow, will never become a real developer.

    It should be said that the "remedy" offered to solve the problem:

    Select LastName, Firstname
    From Sales.vIndividualCustomer

    is also wrong. This one returns a recordset with 2 fields (lastname and firstname). The code written by the nameless developer had to return a recordset with 1 field containing the concatenation of lastname, the string literal ',' and firstname. It is not the same.

    3. The third example - using a cursor for filtering records in a resultset - is also weak, for the same reasons. The author wrote

    in fact, all of the examples used in this series will be based on real life examples

    I can hardly believe this. I can hardly believe that a person knows SELECT and doesn't know WHERE. That can only happen under the following circumstances:

    3.1. The person is a moron.

    3.2. The person has no skill and DOES NOT WANT TO GET ANY SKILL (3.1 again?).

    That kind of person will never read ANY article, even one written FOR MORONS ONLY.

    4. To be more constructive, let me introduce a real-life example. Let's say you are a stockbroker's programmer. Let's say your 10000 clients make 100000 deals per day, and 60000 of them are made by one special client.

    4.1. Your aim is to calculate the commission for each deal by the following rules:

    4.1.1. The "base" commission is calculated as deal_volume * rate, i.e. quantity_of_securities * deal_price * rate.

    4.1.2. If the running total of 4.1.1 (the sum of 4.1.1 over the client's previous deals plus the current deal) exceeds some border_for_this_client, then the commission is 0; otherwise the commission equals 4.1.1.

    The problem is to develop a procedure to calculate the commission for each deal. The set-based variant sucks because you have to calculate a running total over 60000 records. It sucks if you have every index you can imagine. It sucks if you're a moron and don't know indexes at all.

    4.2. You have the following possibilities:

    4.2.1. Use a WHILE loop with an identity column:

    declare @t table(
        id int identity primary key clustered
        ,client int
        ,price decimal(19,2)
        ,qty int
        ,commission decimal(19,2)
    )

    declare @id int, @client int, @commission decimal(19,2)

    select @id = min(id) from @t

    while @id is not null
    begin
        --calculating @commission for current deal of current client
        ...
        --updating base table
        update @t set commission = @commission where id = @id

        select @id = min(id) from @t where id > @id
    end

    4.2.2. Use a WHILE loop with a cursor.

    4.2.3. Write your own procedural extension for SQL Server:

    4.2.3.1. An extended stored proc based on a DLL.

    4.2.3.2. An extended stored proc based on .NET.

    4.2.2 works faster than 4.2.1 (you can check this for yourself at home). In fact 4.2.1 performs the same loop, only more slowly.

    4.2.2 does not require writing C++ or C# with its potential bugs (oh, this is your FIRST extended proc?).

    What is your choice? Or what is your solution?

    And of course sorry for my English.

  • Jeff Moden (7/1/2010)


    Mike C (7/1/2010)


    Jeff Moden (7/1/2010)


    Mike C (7/1/2010)


    Good catch on the week # UMG. As you mentioned, I did assume 2005 or 2008, but only because 2000 is out of support :). If using 2000, he can easily turn the CTEs into SELECT INTO #tempTable statements. I'm sure there's plenty of opportunity for optimization here as well, if we had some sample data and expected outputs.

    One note on your update--be careful when using this syntax:

    UPDATE t
    SET ...
    FROM table t
    INNER JOIN table2 t2
    ON ...

    This syntax lends itself to parallelism and can result in unpredictable "intra-query parallelism deadlock" events. I just fixed 4 or 5 production queries for a bank with this very problem.

    Thanks

    Mike C

    Now THAT's interesting. What was the fix, Mike? MAXDOP?

    That was our DBA's quick-fix, but it turned a 1-hour stored proc into a 5-hour stored proc. In some cases you can tweak the indexes on the tables to prevent table scans of the input tables, which is the underlying cause of the parallelism. In this case the joins were already on the clustered PK. My fix was to rewrite the query completely using ANSI correlated subquery syntax. That cut the run-time down to about 12 minutes and uncovered a business logic flaw when I changed it (multiple matching rows in the second table).

    Mike C

    The only time that I've seen a problem is when you have a joined update but have neglected to include the target table in the FROM clause. I've seen it take a 20-second update and slam 4 CPUs into the wall for over 2 hours. The offending code will usually look something like the following...

    UPDATE table1
    SET ...
    FROM table2 t2
    WHERE table1.somecolumn = t2.somecolumn

    Notice in the above that table1 is being used in an old-style join but table1 is not in the FROM clause.

    I've seen the exact opposite 🙂 I witnessed the agony of the UPDATE...FROM source INNER JOIN target... on a 32-processor server. In most cases both syntaxes produce the same query plan, although I imagine there are cases where they could produce different query plans (such as adding additional joins to the mix or using an outer join instead of an inner join).

    What I found really interesting was that when they upgraded the hardware (better, faster, etc.), it made the problems with this syntax much worse. Queries that used to self-deadlock once a month suddenly deadlocked twice a week.
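
    For reference, the MAXDOP quick-fix mentioned above can be scoped to the single offending statement instead of the whole server, e.g. against the #Destination/#Source example earlier in the thread:

    UPDATE d
    SET d.VAL = s.VAL
    FROM #Destination d
    INNER JOIN #Source s
        ON d.ID = s.ID
    OPTION (MAXDOP 1);   -- a serial plan removes the intra-query parallelism (at a run-time cost)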

  • slavut (7/2/2010)


    4. To be more constructive, let me introduce a real-life example. Let's say you are a stockbroker's programmer. Let's say your 10000 clients make 100000 deals per day, and 60000 of them are made by one special client.

    4.1. Your aim is to calculate the commission for each deal by the following rules:

    4.1.1. The "base" commission is calculated as deal_volume * rate, i.e. quantity_of_securities * deal_price * rate.

    4.1.2. If the running total of 4.1.1 (the sum of 4.1.1 over the client's previous deals plus the current deal) exceeds some border_for_this_client, then the commission is 0; otherwise the commission equals 4.1.1.

    The problem is to develop a procedure to calculate the commission for each deal. The set-based variant sucks because you have to calculate a running total over 60000 records. It sucks if you have every index you can imagine. It sucks if you're a moron and don't know indexes at all.

    4.2. You have the following possibilities:

    4.2.1. Use a WHILE loop with an identity column:

    declare @t table(
        id int identity primary key clustered
        ,client int
        ,price decimal(19,2)
        ,qty int
        ,commission decimal(19,2)
    )

    declare @id int, @client int, @commission decimal(19,2)

    select @id = min(id) from @t

    while @id is not null
    begin
        --calculating @commission for current deal of current client
        ...
        --updating base table
        update @t set commission = @commission where id = @id

        select @id = min(id) from @t where id > @id
    end

    4.2.2. Use a WHILE loop with a cursor.

    4.2.3. Write your own procedural extension for SQL Server:

    4.2.3.1. An extended stored proc based on a DLL.

    4.2.3.2. An extended stored proc based on .NET.

    4.2.2 works faster than 4.2.1 (you can check this for yourself at home). In fact 4.2.1 performs the same loop, only more slowly.

    4.2.2 does not require writing C++ or C# with its potential bugs (oh, this is your FIRST extended proc?).

    What is your choice? Or what is your solution?

    And of course sorry for my English.

    Well, to start off, I would point out that many authors begin by introducing basic concepts and simple examples, especially when they're writing a series of articles like this.

    Extended stored procedures have been deprecated since SQL Server 2005, and I wouldn't recommend using them, for a couple of very good reasons.

    The current issue with calculating running sums and moving averages in T-SQL is the lack of support for the full OVER clause with aggregate functions. When T-SQL catches up to the standard in that regard, set-based running sums, moving averages, etc., may meet with your approval.
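
    For what it's worth, once the full OVER clause is available (SQL Server 2012 and later), the running-total part of the commission example can be expressed along these lines; the table and column names are placeholders following slavut's description:

    SELECT deal_id,
           client,
           price * qty * rate AS base_commission,
           SUM(price * qty * rate) OVER
           (
               PARTITION BY client
               ORDER BY deal_id
               ROWS UNBOUNDED PRECEDING
           ) AS running_commission
    FROM dbo.Deals;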

    There has been a lot of testing around this very subject, and the general result is that procedural code does work best in optimized procedural languages. A well-written SQL CLR stored procedure does in fact outrun an equivalent cursor in the tests I've seen. Adam Machanic has performed some testing in this area; you might check out his blog.

    Another solution to this specific problem might be to perform the calculation in your data flow if you are loading the data via an ETL tool.

    Mike C

    I am usually a fan of your posts. However, these articles left a lot to be desired. The examples provided are trivial at best (please don't take this the wrong way), and anyone who has been a SQL DBA for any decent length of time would know there are more efficient means of accomplishing what you did without cursors or while loops.

    I would like to see a real-world example avoiding cursors or while loops for scenarios where complex business logic is involved. Also, cursors can be designed to perform quite efficiently. Many factors come into play when designing efficient cursors. For example, Primary Keys, Indexing, Use of Memory-Based Temp Tables, etc...

    A common approach I use is to

    A) Create a Memory-Based Temp Table to store the records to be processed, e.g. DECLARE @Records TABLE (RecordID int) (this avoids tempdb coming into play)

    B) INSERT Records to be processed INTO the @Records temp table (e.g. SELECT RecordID FROM SomeTable)

    C) IMPLEMENT CURSOR or WHILE Loop here

  • jyurich (7/3/2010)


    I am usually a fan of your posts. However, these articles left a lot to be desired. The examples provided are trivial at best (please don't take this the wrong way), and anyone who has been a SQL DBA for any decent length of time would know there are more efficient means of accomplishing what you did without cursors or while loops.

    I would like to see a real-world example avoiding cursors or while loops for scenarios where complex business logic is involved. Also, cursors can be designed to perform quite efficiently. Many factors come into play when designing efficient cursors. For example, Primary Keys, Indexing, Use of Memory-Based Temp Tables, etc...

    A common approach I use is to

    A) Create a Memory-Based Temp Table to store the records to be processed, e.g. DECLARE @Records TABLE (RecordID int) (this avoids tempdb coming into play)

    B) INSERT Records to be processed INTO the @Records temp table (e.g. SELECT RecordID FROM SomeTable)

    C) IMPLEMENT CURSOR or WHILE Loop here

    I'm curious... What have you done that requires such a cursor?

    And, by the way... table variables are not "memory only". Table Variables and Temp Tables can both start out in memory and they both use disk space from TempDB when they don't fit in memory.

    Here's an old but still appropriate Microsoft article on the subject...

    http://support.microsoft.com/default.aspx?scid=kb;en-us;305977&Product=sql2k

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

