To answer your question, there will be no difference in performance or resource usage. The only difference is that the first rendition could be created as a stored procedure and the second cannot because of the "GO" batch separators.
Shifting gears a bit, the real key here is to figure out why you have so much fragmentation and fix it, especially on HEAPs and CLUSTERED INDEXes. Such massive fragmentation on HEAPs is normally due to ExpAnsive Updates. On Clustered Indexes, such massive fragmentation is usually caused by ExpAnsive Updates and/or Out-of-Order Inserts.
Another key is to determine whether the 99% fragmentation is actually causing you performance issues. For mostly single-row lookups, logical fragmentation just won't matter. You also need to look at what people refer to as "Physical Fragmentation", which is a misnomer for "Page Density", because low page density can be a VERY big deal in the form of totally wasted memory and disk space.
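As a sketch of how you might check both numbers for one table (the database and the "dbo.YourTable" name are placeholders for your own objects), note that the page-density column only comes back when you use the SAMPLED or DETAILED mode:

```sql
-- Check logical fragmentation AND page density for one table.
-- 'SAMPLED' mode is needed; 'LIMITED' returns NULL for page density.
SELECT  i.name                               AS IndexName
       ,ips.index_type_desc
       ,ips.avg_fragmentation_in_percent     -- "logical" fragmentation
       ,ips.avg_page_space_used_in_percent   -- page density (wasted disk/memory)
       ,ips.page_count
  FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'SAMPLED') ips
  JOIN sys.indexes i
    ON  i.object_id = ips.object_id
   AND  i.index_id  = ips.index_id
;
```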
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
________Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
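As a made-up illustration of that shift (the dbo.Product table and its Price and CategoryID columns are hypothetical), compare touching each row in turn with one statement against the column:

```sql
-- RBAR: a cursor visiting one row at a time.
DECLARE @ProductID INT;
DECLARE cur CURSOR FOR
    SELECT ProductID FROM dbo.Product WHERE CategoryID = 42;
OPEN cur;
FETCH NEXT FROM cur INTO @ProductID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Product SET Price = Price * 1.10 WHERE ProductID = @ProductID;
    FETCH NEXT FROM cur INTO @ProductID;
END;
CLOSE cur; DEALLOCATE cur;

-- Set-based: one statement that does the same thing to the whole column.
UPDATE dbo.Product
   SET Price = Price * 1.10
 WHERE CategoryID = 42;
```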
"If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
"Change is inevitable... change for the better is not."
When you put the right degree of spin on it, the number 3|8 is also a glyph that describes the nature of a DBA's job. 😉
How to post code problems
Create a Tally Function (fnTally)