Although it is best practice to name columns, and especially in an article like this it would be advisable to do it correctly, it is not true that the insert will not work. The insert will work as the author wrote it.
The setting of @loop_variable is done only once in the entire process, so I doubt the proposed change would increase performance several fold. But, like Bob, I would be very interested in more detail about why replacing the cursor with a WHILE loop increases performance by 75%. The loop in the example does essentially the same thing as the cursor: it iterates row after row and calls a procedure for each row... so where is the huge gain? Better use of indexes? A reused query plan?
IMHO the main problem with cursors is that some people tend to use them where a true set-based approach is possible, i.e. updating 1000 rows at once instead of one after another. I'm not so sure about the example shown... it is one of those cases where I think cursors aren't really that bad. Since I have never had to use cursors, I prefer to avoid them altogether - but if you are familiar with them, this could be one of the situations where a cursor is OK. I'm really curious what some of the more experienced posters will say...
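To illustrate what I mean by set-based, here is a minimal sketch (the table and column names are made up for the example): instead of fetching each row into a cursor and updating it individually, a single statement handles all qualifying rows at once.

```sql
-- Set-based: one statement updates every qualifying row in one pass,
-- instead of a cursor fetching and updating them one by one.
UPDATE o
SET    o.status = 'PROCESSED'
FROM   orders AS o
WHERE  o.order_date < '20070101'
  AND  o.status = 'PENDING'
```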
Thanks for this great article, but I have a question...
Well, I have a lot of procedures that use cursors running in the background, and the server suffers from low performance all day long.
If my query returns more than 1,000,000 rows, is this still the best option?
(These comments are addressed to the author of the article)
Have you tried to convert the cursor to a static cursor?
Because, in my opinion, that's what you are doing with a temporary table (defining the result set a priori). I have seen great improvements from this simple modification: changing DECLARE curs_name CURSOR to DECLARE curs_name CURSOR STATIC (assuming you can ignore subsequent modifications to the original result set).
I always try to achieve a set-based solution, but if you are only trying to speed up the operation with the least amount of change, sometimes it pays to play with the cursor options. Of course, with a temporary table, you can more easily migrate to a set-based solution later.
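For clarity, the one-word change I am describing looks like this (the SELECT is a made-up example; only the STATIC keyword matters):

```sql
-- Default (dynamic-ish) declaration:
-- DECLARE curs_name CURSOR FOR SELECT order_id FROM purchase_orders

-- STATIC materializes the result set in tempdb when the cursor is opened,
-- so later changes to the base tables are not reflected while iterating:
DECLARE curs_name CURSOR STATIC FOR
    SELECT order_id FROM purchase_orders
```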
More problematic (at least, finding an elegant solution) is using set operations when the tables use an identity column (especially when the identity of one table is used as the foreign key of another).
Best regards,
André Cardoso
You can also add PRIMARY KEY to the table definition:
DECLARE @item_table TABLE (primary_key INT IDENTITY(1,1) NOT NULL PRIMARY KEY, item_category_id INT, order_id INT )
The following SELECT is then better:
SELECT @item_category_id = item_category_id, @order_id = order_id FROM @item_table WHERE primary_key = @item_category_counter
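Putting it together, the WHILE loop would then look roughly like this (a sketch only; the counter variables are named to match the snippets above, and usp_generate_purchase_order stands in for whatever procedure the loop actually calls):

```sql
DECLARE @item_category_counter INT, @loop_count INT
DECLARE @item_category_id INT, @order_id INT

SELECT @item_category_counter = 1, @loop_count = COUNT(*) FROM @item_table

WHILE @item_category_counter <= @loop_count
BEGIN
    -- The PRIMARY KEY on the table variable makes this lookup a cheap seek
    SELECT @item_category_id = item_category_id,
           @order_id         = order_id
    FROM   @item_table
    WHERE  primary_key = @item_category_counter

    EXEC usp_generate_purchase_order @item_category_id, @order_id

    SET @item_category_counter = @item_category_counter + 1
END
```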
I think it's worth making the point that this approach does nothing to combat what is fundamentally bad about cursors. All we have done here is create a cursor without using the DECLARE CURSOR syntax.
The article states:
<However, this kills the whole theory of set-based operations, which are inherently faster than their row based counterparts. The following is one way of converting your CURSOR statement using the WHILE keyword in such a scenario>
Yes, but this still isn't set based is it? It's still doing everything row by row, which is exactly what we want to avoid.
Much better to take a step back and see if you can process many, or all, of the purchase orders in a small number of set-based SQL statements, and try not to take such a procedural approach.
I'd also say that the 75% performance improvement is highly dubious. How can this be? Presumably, the work is being done within the usp_generate_purchase_order procedure. Are we really saying that the looping code was the bottleneck? If this process took 1 minute originally, are we saying that replacing the cursor with the WHILE loop reduced it to 15 seconds? I really don't think so.
I'm not saying that this construct should never be used. There are many cases where you might want to slow things down (archiving/purging/data moving, for example) to aid concurrency and reduce the size of transactions, but this is not a good pattern for high-performance database programming.
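For the archiving/purging case, a common shape for that kind of deliberately throttled work is a batched delete rather than a cursor (the table name and retention period here are hypothetical; DELETE TOP requires SQL Server 2005 or later):

```sql
-- Purge old rows in small batches to keep each transaction short
-- and reduce blocking of concurrent activity.
DECLARE @rows_deleted INT
SET @rows_deleted = 1

WHILE @rows_deleted > 0
BEGIN
    DELETE TOP (5000) FROM audit_log
    WHERE  log_date < DATEADD(month, -6, GETDATE())

    -- @@ROWCOUNT is 0 once nothing qualifies, which ends the loop
    SET @rows_deleted = @@ROWCOUNT
END
```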