# Passing Parameters as (almost) 1, 2, and 3 Dimensional Arrays

• Comments posted to this topic are about the item Passing Parameters as (almost) 1, 2, and 3 Dimensional Arrays

--Jeff Moden

RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

Change is inevitable... Change for the better is not.

How to post code problems
How to Post Performance Problems
Create a Tally Function (fnTally)

• Good article (of course).

I decided to try an experiment on your big numbers CTE (mainly because I'm bored this morning).

First, I ran it exactly as presented in the article, and got these CPU times: 219, 219, 187, 203, 187 (5 runs).

Then I modified the CTE to select from my BigNumbers table (100-million rows of overkill). Run times: 328, 328, 313, 312, 312

But when I modified the CTE to select from a cross join of Numbers (10-thousand rows) to Numbers, instead of sys.all_objects to sys.all_objects, I got these run times: 188, 219, 218, 235, 218

I also tried using a table with 100-thousand rows of Numbers: 188, 172, 172, 140, 157

Since we're looking for a way to have millions of rows, the 100-thousand row table won't do, but I wanted to test it to see the baseline.

Since I saw a pattern in this, whereby the smaller the base table, the faster the query (makes sense), I tried making a 1000-row SmallNumbers table, cross joining that to itself, and running from there. Run times: 203, 203, 203, 187, 172

Just for the sake of test-to-break, I also tried creating a table called TinyNumbers, with 101 rows in it, and then cross joining that 3 times. Results: 171, 219, 219, 219, 234

I got my best run-times and best functionality off a 2-CTE Numbers structure, as follows:

```sql
;WITH
Multiplier (Mult) AS
(
    SELECT TOP (LEN(@Parameter)/100000) ROW_NUMBER() OVER (ORDER BY Number)
      FROM dbo.TinyNumbers
     UNION
    SELECT 1
      FROM dbo.TinyNumbers
     WHERE LEN(@Parameter)/100000 < 1
),
cteTally (Number) AS
(
    SELECT TOP (LEN(@Parameter)-1)
           ROW_NUMBER() OVER (ORDER BY n1.Number)
      FROM dbo.SmallNumbers n1
     CROSS JOIN dbo.SmallNumbers n2
     CROSS JOIN Multiplier
)
SELECT ROW_NUMBER() OVER (ORDER BY Number) AS Number,
       SUBSTRING(@Parameter, Number+1, CHARINDEX(',', @Parameter, Number+1) - Number - 1) AS Value
  FROM cteTally
 WHERE SUBSTRING(@Parameter, Number, 1) = ','
```

This one adds about .01 second of CPU time because of the Multiplier CTE, but it also means it can handle inputs of up to 1 billion characters.

It appears, after these tests, that it's faster to run a cross join in a CTE (which builds a worktable in tempdb) than to query a large Numbers table, and that a 1,000-row base is faster than a larger base (like sys.all_objects). The speed difference for the smaller cross join is slight, but measurable, and might be worth it on a heavily loaded system.
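For anyone who wants to reproduce these comparisons, the CPU numbers above can be captured with a harness like the one below. The base-table name is an assumption (substitute whichever base you're testing); the timings land on the Messages tab.

```sql
SET STATISTICS TIME ON

DECLARE @Bitbucket INT

-- Assign into a variable so result-set display time doesn't pollute the measurement.
SELECT @Bitbucket = MAX(Number)
  FROM (SELECT TOP 1000000
               ROW_NUMBER() OVER (ORDER BY n1.Number) AS Number
          FROM dbo.SmallNumbers n1          -- assumed 1,000-row base table
         CROSS JOIN dbo.SmallNumbers n2) AS Tally

SET STATISTICS TIME OFF
```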

You might also notice that I modified the Top () portion of the cteTally and got rid of the Len part of the Where clause in the outer query. Subtracting 1 from the len makes that part of the Where redundant, and cut about .03 off the run-time (172, 173, 171, 181, 173).

When I got rid of the output (assigned the select value to a variable instead of returning it as a result set), and increased the number of elements in the string to 80,000, this version took 1.8 seconds.

(Yeah, like I said, I'm bored this morning.)

- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC

"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

• Good stuff, Gus! Looks like I have a couple more things to play with. Glad you were bored this morning! :)

--Jeff Moden

• Awesome one Jeff. Another bomb. I just wonder how much pain you take to write these articles? Excellent one. Keep it up......:)

• Anirban Paul (5/23/2008)

Awesome one Jeff. Another bomb. I just wonder how much pain you take to write these articles? Excellent one. Keep it up......:)

Thanks for the great feedback and the question, too, Anirban!

I'm not what I'd call a "prolific" writer... I agonize over everything... form, fit, function, correctness and readability of code, order of presentation, etc. The code is especially important... I hate it when I use someone else's code and it doesn't work as advertised or it's difficult to read and has no embedded documentation. The code examples I create are typically very simple, but they have to follow my own rules... "Make it work, make it fast, make it pretty, and it ain't done 'til it's pretty."

I also hate it when someone's graphics are too small to read and, when you zoom in on them, too blurry to easily read.

This last article, if I had to guess, took me somewhere between 12 and 16 hours to write. A more prolific writer could probably have banged it out in an hour or two and still done a heck of a good job.

Heh... just imagine me writing a book... 11 pages every 2 days... it would take an eon for me to write one. I don't know how some of these folks can turn out a couple of 500 or 1,000 page books a year... I envy them, quite a bit.

--Jeff Moden

• Jeff Moden (5/23/2008)

Heh... just imagine me writing a book... 11 pages every 2 days... it would take an eon for me to write one. I don't know how some of these folks can turn out a couple of 500 or 1,000 page books a year... I envy them, quite a bit.

Hmm, Jeff. Food for thought. If you can crank out 11 pages in 2 days:

11 pages every 2 days = 5.5 pages per day.

So 750 pages ==> 27 weeks at 5 days a week (and we all know you don't walk away on weekends...).

So - I take it that means we can be expecting 2 books a year from you? :w00t:

----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

• Thanks Jeff, for some more COOL STUFF.

Most days I don't have time to read the lengthier articles, but I did today, and I've been meaning to look into these Tally thingies you keep talking about to see how useful they actually are.

That's some great, efficient code, and some of it is now in my "library".

Thanks again,

Tom Garth
Vertical Solutions

"There are three kinds of men. The one that learns by reading. The few who learn by observation. The rest of them have to pee on the electric fence for themselves." -- Will Rogers

• That's one of the best compliments I could hope for, Tom... someone adding this type of code to their library. Thanks!

If you haven't done so already, here's a recent article I wrote about how the Tally table works to replace loops, in some cases...

http://www.sqlservercentral.com/articles/TSQL/62867/

... and, if you get really bored on this fine 3 day weekend, here's all my stuff, so far. The one on running balances has some pretty neat stuff that can be used for other, surprising things...

http://www.sqlservercentral.com/Authors/Articles/Jeff_Moden/80567/

Even more worth reading are the great suggestions and wonderful code examples folks submitted in the discussions that followed each article.

--Jeff Moden

• Matt Miller (5/23/2008)

Jeff Moden (5/23/2008)

Heh... just imagine me writing a book... 11 pages every 2 days... it would take an eon for me to write one. I don't know how some of these folks can turn out a couple of 500 or 1,000 page books a year... I envy them, quite a bit.

Hmm, Jeff. Food for thought. If you can crank out 11 pages in 2 days:

11 pages every 2 days = 5.5 pages per day.

So 750 pages ==> 27 weeks at 5 days a week (and we all know you don't walk away on weekends...).

So - I take it that means we can be expecting 2 books a year from you? :w00t:

Oh heck no... those are two 8-hour days... gotta earn a living, and a book isn't likely to be a great source of income... HEH! If you don't believe that, look at Celko... he's got a number of books on the market and he still dresses funny... :)

--Jeff Moden

• Jeff Moden (5/23/2008)

Oh heck no... those are two 8-hour days... gotta earn a living, and a book isn't likely to be a great source of income... HEH! If you don't believe that, look at Celko... he's got a number of books on the market and he still dresses funny... :)

The funny clothes I can deal with. The tin foil hat on the other hand...:)


• This last article, if I had to guess, took me somewhere between 12 and 16 hours to write. A more prolific writer could probably have banged it out in an hour or two and still done a heck of a good job.

Still, if you are not a prolific writer, then we should change the definition of "prolific writer"... ;)

• Matt Miller (5/23/2008)

The funny clothes I can deal with. The tin foil hat on the other hand...:)

Heh! C'mon... I don't wear the hat in public... and no one can see the lead codpiece... :hehe:

--Jeff Moden

• Jeff Moden (5/24/2008)

Matt Miller (5/23/2008)

The funny clothes I can deal with. The tin foil hat on the other hand...:)

Heh! C'mon... I don't wear the hat in public... and no one can see the lead codpiece... :hehe:

Now there's a visual I didn't need. I'm going to have to double my alcohol intake tomorrow just to flush that right out... :w00t:


• This 2D stuff is excellent for normalizing 1NF (first normal form) violations like '123^12|456^45|789^12|945^34'

2D array parsing without table variables or temp tables! It uses Itzik Ben-Gan's parsing algorithm, which relies on a table of numbers (counter / tally / nums). My 2D enhancement uses CROSS APPLY, so it doesn't work in SQL Server 2000.

2D 'Table' version - outputs verticalized data only; faster, but not very useful on 2D data:

```sql
--Normal VarChar version
CREATE FUNCTION dbo.fn_DelimitToTable_2D
(
    @String     VarChar(8000),
    @Delimiter1 VarChar(1),
    @Delimiter2 VarChar(1)
) RETURNS TABLE
AS
RETURN
(
    SELECT Counter2nd.Value AS Value
      FROM (
            SELECT SUBSTRING(@String + @Delimiter1, PK_CountID,
                       CHARINDEX(@Delimiter1, @String + @Delimiter1, PK_CountID) - PK_CountID) AS Value
              FROM dbo.Counter
             WHERE PK_CountID > 0
               AND PK_CountID < LEN(@String) + LEN(@Delimiter1)
               AND SUBSTRING(@Delimiter1 + @String + @Delimiter1, PK_CountID, 1) = @Delimiter1
           ) AS Counter1st
     CROSS APPLY (
            SELECT SUBSTRING(Counter1st.Value + @Delimiter2, PK_CountID,
                       CHARINDEX(@Delimiter2, Counter1st.Value + @Delimiter2, PK_CountID) - PK_CountID) AS Value
              FROM dbo.Counter
             WHERE PK_CountID > 0
               AND PK_CountID < LEN(Counter1st.Value) + LEN(@Delimiter2)
               AND SUBSTRING(@Delimiter2 + Counter1st.Value + @Delimiter2, PK_CountID, 1) = @Delimiter2
           ) AS Counter2nd
)
GO

--Integer casting version when output is used to join to integer PK/FK columns.
CREATE FUNCTION dbo.fn_DelimitToIntTable_2D
(
    @String     VarChar(8000),
    @Delimiter1 VarChar(1),
    @Delimiter2 VarChar(1)
) RETURNS TABLE
AS
RETURN
(
    SELECT CONVERT(int, Counter2nd.Value) AS PK_IntID
      FROM (
            SELECT SUBSTRING(@String + @Delimiter1, PK_CountID,
                       CHARINDEX(@Delimiter1, @String + @Delimiter1, PK_CountID) - PK_CountID) AS Value
              FROM dbo.Counter
             WHERE PK_CountID > 0
               AND PK_CountID < LEN(@String) + LEN(@Delimiter1)
               AND SUBSTRING(@Delimiter1 + @String + @Delimiter1, PK_CountID, 1) = @Delimiter1
           ) AS Counter1st
     CROSS APPLY (
            SELECT SUBSTRING(Counter1st.Value + @Delimiter2, PK_CountID,
                       CHARINDEX(@Delimiter2, Counter1st.Value + @Delimiter2, PK_CountID) - PK_CountID) AS Value
              FROM dbo.Counter
             WHERE PK_CountID > 0
               AND PK_CountID < LEN(Counter1st.Value) + LEN(@Delimiter2)
               AND SUBSTRING(@Delimiter2 + Counter1st.Value + @Delimiter2, PK_CountID, 1) = @Delimiter2
           ) AS Counter2nd
)
GO
```

'Array' version - outputs indexer also (more overhead):

```sql
--Normal VarChar version
CREATE FUNCTION dbo.fn_DelimitToArray_2D
(
    @String     VarChar(8000),
    @Delimiter1 VarChar(1),
    @Delimiter2 VarChar(1)
) RETURNS TABLE
AS
RETURN
(
    SELECT Counter1st.Pos AS RowPos, Counter2nd.Pos AS ColPos, Counter2nd.Value AS Value
      FROM (
            SELECT PK_CountID - LEN(REPLACE(LEFT(@String, PK_CountID-1), @Delimiter1, '')) AS Pos,
                   SUBSTRING(@String + @Delimiter1, PK_CountID,
                       CHARINDEX(@Delimiter1, @String + @Delimiter1, PK_CountID) - PK_CountID) AS Value
              FROM dbo.Counter
             WHERE PK_CountID > 0
               AND PK_CountID < LEN(@String) + LEN(@Delimiter1)
               AND SUBSTRING(@Delimiter1 + @String + @Delimiter1, PK_CountID, 1) = @Delimiter1
           ) AS Counter1st
     CROSS APPLY (
            SELECT PK_CountID - LEN(REPLACE(LEFT(Counter1st.Value, PK_CountID-1), @Delimiter2, '')) AS Pos,
                   SUBSTRING(Counter1st.Value + @Delimiter2, PK_CountID,
                       CHARINDEX(@Delimiter2, Counter1st.Value + @Delimiter2, PK_CountID) - PK_CountID) AS Value
              FROM dbo.Counter
             WHERE PK_CountID > 0
               AND PK_CountID < LEN(Counter1st.Value) + LEN(@Delimiter2)
               AND SUBSTRING(@Delimiter2 + Counter1st.Value + @Delimiter2, PK_CountID, 1) = @Delimiter2
           ) AS Counter2nd
)
GO

--Integer casting version when output is used to join to integer PK/FK columns.
CREATE FUNCTION dbo.fn_DelimitToIntArray_2D
(
    @String     VarChar(8000),
    @Delimiter1 VarChar(1),
    @Delimiter2 VarChar(1)
) RETURNS TABLE
AS
RETURN
(
    SELECT Counter1st.Pos AS RowPos, Counter2nd.Pos AS ColPos, CONVERT(int, Counter2nd.Value) AS PK_IntID
      FROM (
            SELECT PK_CountID - LEN(REPLACE(LEFT(@String, PK_CountID-1), @Delimiter1, '')) AS Pos,
                   SUBSTRING(@String + @Delimiter1, PK_CountID,
                       CHARINDEX(@Delimiter1, @String + @Delimiter1, PK_CountID) - PK_CountID) AS Value
              FROM dbo.Counter
             WHERE PK_CountID > 0
               AND PK_CountID < LEN(@String) + LEN(@Delimiter1)
               AND SUBSTRING(@Delimiter1 + @String + @Delimiter1, PK_CountID, 1) = @Delimiter1
           ) AS Counter1st
     CROSS APPLY (
            SELECT PK_CountID - LEN(REPLACE(LEFT(Counter1st.Value, PK_CountID-1), @Delimiter2, '')) AS Pos,
                   SUBSTRING(Counter1st.Value + @Delimiter2, PK_CountID,
                       CHARINDEX(@Delimiter2, Counter1st.Value + @Delimiter2, PK_CountID) - PK_CountID) AS Value
              FROM dbo.Counter
             WHERE PK_CountID > 0
               AND PK_CountID < LEN(Counter1st.Value) + LEN(@Delimiter2)
               AND SUBSTRING(@Delimiter2 + Counter1st.Value + @Delimiter2, PK_CountID, 1) = @Delimiter2
           ) AS Counter2nd
)
GO
```
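To see what the 'Array' version returns, here's a quick call against the 1NF-violation string from above (assuming a populated dbo.Counter table):

```sql
-- Splits on '|' (rows) first, then on '^' (columns), with position indexers.
SELECT RowPos, ColPos, Value
  FROM dbo.fn_DelimitToArray_2D('123^12|456^45|789^12|945^34', '|', '^')
 ORDER BY RowPos, ColPos
```

This should come back as eight rows: RowPos 1 through 4, ColPos 1 and 2, with values '123', '12', '456', '45', and so on.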

For those of you who don't have Itzik Ben-Gan's Inside SQL Server 2005 T-SQL books and haven't been to any of his conference sessions (the books are a lot cheaper), here are the 1D versions:

'Table' version - ordinal position stripped out for speed; great for stored-procedure-izing IN() clauses - WHERE id IN (1,2,3,4):

```sql
--Normal VarChar version
CREATE FUNCTION dbo.fn_DelimitToTable
(
    @String    VarChar(8000),
    @Delimiter VarChar(1)
) RETURNS TABLE
AS
RETURN
(
    SELECT SUBSTRING(@String + @Delimiter, PK_CountID,
               CHARINDEX(@Delimiter, @String + @Delimiter, PK_CountID) - PK_CountID) AS Value
      FROM dbo.Counter
     WHERE PK_CountID > 0
       AND PK_CountID < LEN(@String) + LEN(@Delimiter)
       AND SUBSTRING(@Delimiter + @String + @Delimiter, PK_CountID, 1) = @Delimiter
)
GO

--Integer casting version when output is used to join to integer PK/FK columns.
CREATE FUNCTION dbo.fn_DelimitToIntTable
(
    @String    VarChar(8000),
    @Delimiter VarChar(1)
) RETURNS TABLE
AS
RETURN
(
    SELECT CONVERT(int, SUBSTRING(@String + @Delimiter, PK_CountID,
               CHARINDEX(@Delimiter, @String + @Delimiter, PK_CountID) - PK_CountID)) AS PK_IntID
      FROM dbo.Counter
     WHERE PK_CountID > 0
       AND PK_CountID < LEN(@String) + LEN(@Delimiter)
       AND SUBSTRING(@Delimiter + @String + @Delimiter, PK_CountID, 1) = @Delimiter
)
GO
```
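As a usage sketch for the IN() scenario (dbo.Orders and its columns are hypothetical names, just for illustration):

```sql
-- Filter by a comma-delimited ID list, e.g. one passed into a stored procedure.
DECLARE @IDList VarChar(8000)
SET @IDList = '1,2,3,4'

SELECT o.*
  FROM dbo.Orders o   -- hypothetical table
 WHERE o.OrderID IN (SELECT PK_IntID FROM dbo.fn_DelimitToIntTable(@IDList, ','))
```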

'Array' version - with position indexer - good for index change scripts where column-order matters:

```sql
--Normal VarChar version
CREATE FUNCTION dbo.fn_DelimitToArray
(
    @String    VarChar(8000),
    @Delimiter VarChar(1)
) RETURNS TABLE
AS
RETURN
(
    SELECT PK_CountID - LEN(REPLACE(LEFT(@String, PK_CountID-1), @Delimiter, '')) AS Pos,
           SUBSTRING(@String + @Delimiter, PK_CountID,
               CHARINDEX(@Delimiter, @String + @Delimiter, PK_CountID) - PK_CountID) AS Value
      FROM dbo.Counter
     WHERE PK_CountID > 0
       AND PK_CountID < LEN(@String) + LEN(@Delimiter)
       AND SUBSTRING(@Delimiter + @String + @Delimiter, PK_CountID, 1) = @Delimiter
)
GO

--Integer casting version when output is used to join to integer PK/FK columns.
CREATE FUNCTION dbo.fn_DelimitToIntArray
(
    @String    VarChar(8000),
    @Delimiter VarChar(1)
) RETURNS TABLE
AS
RETURN
(
    SELECT PK_CountID - LEN(REPLACE(LEFT(@String, PK_CountID-1), @Delimiter, '')) AS Pos,
           CONVERT(int, SUBSTRING(@String + @Delimiter, PK_CountID,
               CHARINDEX(@Delimiter, @String + @Delimiter, PK_CountID) - PK_CountID)) AS PK_IntID
      FROM dbo.Counter
     WHERE PK_CountID > 0
       AND PK_CountID < LEN(@String) + LEN(@Delimiter)
       AND SUBSTRING(@Delimiter + @String + @Delimiter, PK_CountID, 1) = @Delimiter
)
GO
```

As for logical reads on the nums / tally / counter table:

SQL Server 2005 can fit 622 numbers per page if the table is clustered; that drops to 299 if it is a heap. SQL Server 2000 can fit 620 numbers per page clustered.

1 I/O per hit guaranteed: 299-number heap (seek or scan; only tested in 2005)

2 I/Os per hit guaranteed (seek or scan): 622-number clustered index (620 for 2000)

Fully packed 2-level clustered index for a 2 I/O minimum per seek: 386,884 numbers (384,400 for 2000)

Make sure you use a 100% fill factor (the data shouldn't ever change), and after populating the tables with data, do a rebuild:

`ALTER INDEX ALL ON Counter REBUILD WITH (FillFactor=100)` for SQL Server 2005

`DBCC DBREINDEX (Counter, 'PK_C_IX__Counter__CountID', 100)` for SQL Server 2000

I usually use both a 'small' version and a 'standard' version of the table of numbers (counter / nums / tally). I've never needed the 'big' version yet - a fully packed 3-level clustered index with 240,641,848 numbers (238,328,000 for SQL 2000).

Here is my counter table building script for SQL Server 2005 and 2000; it runs in 4 seconds and allows for having a portion of your numbers be negative. @MaxPositive and @ClusteredRowsPerPage are the hard-coded controlling parameters.

1-Level, 2-Level, and 3-Level (commented) Counter / Tally / Nums table builder SQL Server 2005:

```sql
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
--DDL
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
SET NOCOUNT ON
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
IF EXISTS (SELECT * FROM sys.tables WHERE name='CounterSmall' AND schema_id=1) DROP TABLE dbo.CounterSmall
IF EXISTS (SELECT * FROM sys.tables WHERE name='Counter' AND schema_id=1) DROP TABLE dbo.Counter
--IF EXISTS (SELECT * FROM sys.tables WHERE name='CounterBig' AND schema_id=1) DROP TABLE dbo.CounterBig
GO
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
CREATE TABLE dbo.CounterSmall
(
    PK_CountID int NOT NULL,
    CONSTRAINT PK_C_IX__CounterSmall__CountID PRIMARY KEY CLUSTERED (PK_CountID) WITH FILLFACTOR=100
)
CREATE TABLE dbo.Counter
(
    PK_CountID int NOT NULL,
    CONSTRAINT PK_C_IX__Counter__CountID PRIMARY KEY CLUSTERED (PK_CountID) WITH FILLFACTOR=100
)
/*
CREATE TABLE dbo.CounterBig
(
    PK_CountID int NOT NULL,
    CONSTRAINT PK_C_IX__CounterBig__CountID PRIMARY KEY CLUSTERED (PK_CountID) WITH FILLFACTOR=100
)
*/
GO
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
--Counter SQL 2005
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
DECLARE @Power int
DECLARE @HeapRowsPerPage int
DECLARE @ClusteredRowsPerPage int
DECLARE @MaxRows int
DECLARE @MaxPositive int
DECLARE @MaxNegative int
DECLARE @OldMaxNegative int
SET @ClusteredRowsPerPage=622
SET @HeapRowsPerPage=299
SET @MaxPositive=621
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
SET @MaxRows=@ClusteredRowsPerPage
SET @MaxPositive=@MaxPositive-1
SET @OldMaxNegative=0
SET @MaxNegative=@MaxRows-@MaxPositive-@OldMaxNegative
SET @Power=1
PRINT 'CounterSmall: ' + CONVERT(VarChar(10), @MaxNegative*-1+1) + ' to ' + CONVERT(VarChar(10), @MaxPositive) + ' - ' + CONVERT(VarChar(10), @MaxRows) + ' Rows - 1-Level Clustered Index'
--SELECT @MaxNegative AS MaxNegative, @MaxPositive AS MaxPositive, @OldMaxNegative AS OldMaxNegative, @Power AS Power, @MaxRows AS MaxRows
TRUNCATE TABLE CounterSmall
BEGIN TRANSACTION
/*
INSERT INTO CounterSmall WITH (TABLOCKX) (PK_CountID)
SELECT PK_CountID-@MaxNegative
FROM dbo.fn_Numbers(@MaxRows)
*/
INSERT INTO CounterSmall WITH (TABLOCKX) (PK_CountID) VALUES (1-@MaxNegative)
WHILE @Power<=@MaxRows
BEGIN
    INSERT INTO CounterSmall WITH (TABLOCKX) (PK_CountID)
    SELECT @Power+PK_CountID FROM CounterSmall
    WHERE @Power+PK_CountID<=@MaxPositive
    SET @Power=@Power*2
END
COMMIT
ALTER INDEX ALL ON CounterSmall REBUILD WITH (FillFactor=100)
UPDATE STATISTICS CounterSmall WITH FULLSCAN
--SELECT * FROM CounterSmall
--*=*=*=*=*=*=*=*=*=*=
SET @Power=@ClusteredRowsPerPage
SET @MaxRows=@Power*@ClusteredRowsPerPage
SET @OldMaxNegative=@MaxNegative+@OldMaxNegative
SET @MaxPositive=(@MaxPositive+1)*@ClusteredRowsPerPage
SET @MaxNegative=@MaxRows-@MaxPositive-@OldMaxNegative
PRINT 'Counter: ' + CONVERT(VarChar(10), @MaxNegative*-1-@OldMaxNegative+1) + ' to ' + CONVERT(VarChar(10), @MaxPositive) + ' - ' + CONVERT(VarChar(10), @MaxRows) + ' Rows - 2-Level Clustered Index'
--SELECT @MaxNegative AS MaxNegative, @MaxPositive AS MaxPositive, @OldMaxNegative AS OldMaxNegative, @Power AS Power, @MaxRows AS MaxRows
TRUNCATE TABLE Counter
BEGIN TRANSACTION
INSERT INTO Counter WITH (TABLOCKX) (PK_CountID)
SELECT PK_CountID-@MaxNegative FROM CounterSmall
WHILE @Power<=@MaxRows
BEGIN
    INSERT INTO Counter WITH (TABLOCKX) (PK_CountID)
    SELECT @Power+PK_CountID FROM Counter
    WHERE @Power+PK_CountID<=@MaxPositive
    SET @Power=@Power*2
END
COMMIT
ALTER INDEX ALL ON Counter REBUILD WITH (FillFactor=100)
UPDATE STATISTICS Counter WITH FULLSCAN
--SELECT * FROM Counter ORDER BY PK_CountID
--*=*=*=*=*=*=*=*=*=*=
/*
SET @Power=@ClusteredRowsPerPage*@ClusteredRowsPerPage
SET @MaxRows=@Power*(@ClusteredRowsPerPage-2)
SET @OldMaxNegative=@MaxNegative+@OldMaxNegative
SET @MaxPositive=(@MaxPositive+1)*@ClusteredRowsPerPage
SET @MaxNegative=@MaxRows-@MaxPositive-@OldMaxNegative
PRINT 'CounterBig: ' + CONVERT(VarChar(10), @MaxNegative*-1-@OldMaxNegative+1) + ' to ' + CONVERT(VarChar(10), @MaxPositive) + ' - ' + CONVERT(VarChar(10), @MaxRows) + ' Rows - 3-Level Clustered Index'
--SELECT @MaxNegative AS MaxNegative, @MaxPositive AS MaxPositive, @OldMaxNegative AS OldMaxNegative, @Power AS Power, @MaxRows AS MaxRows
TRUNCATE TABLE CounterBig
UPDATE STATISTICS CounterBig WITH FULLSCAN, NORECOMPUTE
BEGIN TRANSACTION
INSERT INTO CounterBig WITH (TABLOCKX) (PK_CountID)
SELECT PK_CountID-@MaxNegative FROM Counter
WHILE @Power<=@MaxRows
BEGIN
    INSERT INTO CounterBig WITH (TABLOCKX) (PK_CountID)
    SELECT @Power+PK_CountID FROM CounterBig
    WHERE @Power+PK_CountID<=@MaxPositive
    SET @Power=@Power*2
END
COMMIT
ALTER INDEX ALL ON CounterBig REBUILD WITH (FillFactor=100)
UPDATE STATISTICS CounterBig WITH FULLSCAN
--SELECT * FROM CounterBig ORDER BY PK_CountID
*/
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
SELECT * FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('CounterSmall'), NULL, NULL, 'DETAILED')
SELECT * FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('Counter'), NULL, NULL, 'DETAILED')
--SELECT * FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('CounterBig'), NULL, NULL, 'DETAILED')
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
GO
```

1-Level, 2-Level, and 3-Level (commented) Counter / Tally / Nums table builder SQL Server 2000:

```sql
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
--DDL
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
SET NOCOUNT ON
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
IF EXISTS (SELECT * FROM sysobjects WHERE name='CounterSmall' AND uid=1 AND xtype='u') DROP TABLE dbo.CounterSmall
IF EXISTS (SELECT * FROM sysobjects WHERE name='Counter' AND uid=1 AND xtype='u') DROP TABLE dbo.Counter
--IF EXISTS (SELECT * FROM sysobjects WHERE name='CounterBig' AND uid=1 AND xtype='u') DROP TABLE dbo.CounterBig
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
CREATE TABLE dbo.CounterSmall
(
    PK_CountID int NOT NULL,
    CONSTRAINT PK_C_IX__CounterSmall__CountID PRIMARY KEY CLUSTERED (PK_CountID) WITH FILLFACTOR=100
)
CREATE TABLE dbo.Counter
(
    PK_CountID int NOT NULL,
    CONSTRAINT PK_C_IX__Counter__CountID PRIMARY KEY CLUSTERED (PK_CountID) WITH FILLFACTOR=100
)
/*
CREATE TABLE dbo.CounterBig
(
    PK_CountID int NOT NULL,
    CONSTRAINT PK_C_IX__CounterBig__CountID PRIMARY KEY CLUSTERED (PK_CountID) WITH FILLFACTOR=100
)
*/
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
--Counter SQL 2000
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
DECLARE @Power int
DECLARE @HeapRowsPerPage int
DECLARE @ClusteredRowsPerPage int
DECLARE @MaxRows int
DECLARE @MaxPositive int
DECLARE @MaxNegative int
DECLARE @OldMaxNegative int
SET @ClusteredRowsPerPage=620
SET @HeapRowsPerPage=299
SET @MaxPositive=619
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
SET @MaxRows=@ClusteredRowsPerPage
SET @MaxPositive=@MaxPositive-1
SET @OldMaxNegative=0
SET @MaxNegative=@MaxRows-@MaxPositive-@OldMaxNegative
SET @Power=1
PRINT 'CounterSmall: ' + CONVERT(VarChar(10), @MaxNegative*-1+1) + ' to ' + CONVERT(VarChar(10), @MaxPositive) + ' - ' + CONVERT(VarChar(10), @MaxRows) + ' Rows - 1-Level Clustered Index'
--SELECT @MaxNegative AS MaxNegative, @MaxPositive AS MaxPositive, @OldMaxNegative AS OldMaxNegative, @Power AS Power, @MaxRows AS MaxRows
TRUNCATE TABLE CounterSmall
BEGIN TRANSACTION
INSERT INTO CounterSmall WITH (TABLOCKX) (PK_CountID) VALUES (1-@MaxNegative)
WHILE @Power<=@MaxRows
BEGIN
    INSERT INTO CounterSmall WITH (TABLOCKX) (PK_CountID)
    SELECT @Power+PK_CountID FROM CounterSmall
    WHERE @Power+PK_CountID<=@MaxPositive
    SET @Power=@Power*2
END
COMMIT
DBCC DBREINDEX (CounterSmall,'PK_C_IX__CounterSmall__CountID',100)
UPDATE STATISTICS CounterSmall WITH FULLSCAN
--SELECT * FROM CounterSmall
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
SET @Power=@ClusteredRowsPerPage
SET @MaxRows=@Power*@ClusteredRowsPerPage
SET @OldMaxNegative=@MaxNegative+@OldMaxNegative
SET @MaxPositive=(@MaxPositive+1)*@ClusteredRowsPerPage
SET @MaxNegative=@MaxRows-@MaxPositive-@OldMaxNegative
PRINT 'Counter: ' + CONVERT(VarChar(10), @MaxNegative*-1-@OldMaxNegative+1) + ' to ' + CONVERT(VarChar(10), @MaxPositive) + ' - ' + CONVERT(VarChar(10), @MaxRows) + ' Rows - 2-Level Clustered Index'
--SELECT @MaxNegative AS MaxNegative, @MaxPositive AS MaxPositive, @OldMaxNegative AS OldMaxNegative, @Power AS Power, @MaxRows AS MaxRows
TRUNCATE TABLE Counter
BEGIN TRANSACTION
INSERT INTO Counter WITH (TABLOCKX) (PK_CountID)
SELECT PK_CountID-@MaxNegative FROM CounterSmall
WHILE @Power<=@MaxRows
BEGIN
    INSERT INTO Counter WITH (TABLOCKX) (PK_CountID)
    SELECT @Power+PK_CountID FROM Counter
    WHERE @Power+PK_CountID<=@MaxPositive
    SET @Power=@Power*2
END
COMMIT
DBCC DBREINDEX (Counter,'PK_C_IX__Counter__CountID',100)
UPDATE STATISTICS Counter WITH FULLSCAN
--SELECT * FROM Counter ORDER BY PK_CountID
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
/*
SET @Power=@ClusteredRowsPerPage*@ClusteredRowsPerPage
SET @MaxRows=@Power*(@ClusteredRowsPerPage-2)
SET @OldMaxNegative=@MaxNegative+@OldMaxNegative
SET @MaxPositive=(@MaxPositive+1)*@ClusteredRowsPerPage
SET @MaxNegative=@MaxRows-@MaxPositive-@OldMaxNegative
PRINT 'CounterBig: ' + CONVERT(VarChar(10), @MaxNegative*-1-@OldMaxNegative+1) + ' to ' + CONVERT(VarChar(10), @MaxPositive) + ' - ' + CONVERT(VarChar(10), @MaxRows) + ' Rows - 3-Level Clustered Index'
--SELECT @MaxNegative AS MaxNegative, @MaxPositive AS MaxPositive, @OldMaxNegative AS OldMaxNegative, @Power AS Power, @MaxRows AS MaxRows
TRUNCATE TABLE CounterBig
UPDATE STATISTICS CounterBig WITH FULLSCAN, NORECOMPUTE
BEGIN TRANSACTION
INSERT INTO CounterBig WITH (TABLOCKX) (PK_CountID)
SELECT PK_CountID-@MaxNegative FROM Counter
WHILE @Power<=@MaxRows
BEGIN
    INSERT INTO CounterBig WITH (TABLOCKX) (PK_CountID)
    SELECT @Power+PK_CountID FROM CounterBig
    WHERE @Power+PK_CountID<=@MaxPositive
    SET @Power=@Power*2
END
COMMIT
DBCC DBREINDEX (CounterBig,'PK_C_IX__CounterBig__CountID',100)
UPDATE STATISTICS CounterBig WITH FULLSCAN
--SELECT * FROM CounterBig ORDER BY PK_CountID
*/
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
DBCC SHOWCONTIG (CounterSmall) WITH ALL_LEVELS, TABLERESULTS
DBCC SHOWCONTIG (Counter) WITH ALL_LEVELS, TABLERESULTS
--DBCC SHOWCONTIG (CounterBig) WITH ALL_LEVELS, TABLERESULTS
--*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
```

I have big versions and two-column versions as well, but this post is already too big. The big version can gracefully handle hundreds of thousands of characters because it splices the input into 8,000-character blocks. It's more code and no longer an inline table-valued function (inline table-valued functions are processed as derived tables / views behind the scenes and are much faster), but it is faster than VarChar(max), works in SQL Server 2000 (if the string input is text instead of VarChar(max)), and never uses more than 8,000 numbers.

I have had other uses for a table of numbers, particularly in reporting involving date ranges, where you want to show a date-range block even if there is no data with a date within that block.
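A sketch of that date-range use, assuming the dbo.Counter table from above and a hypothetical dbo.Sales table:

```sql
-- One row per day in the range, LEFT JOINed so empty days still appear.
DECLARE @StartDate DATETIME, @EndDate DATETIME
SET @StartDate = '20080501'
SET @EndDate   = '20080531'

SELECT DATEADD(dd, c.PK_CountID, @StartDate) AS BucketDate,
       COUNT(s.SaleID) AS SaleCount          -- COUNT(col) ignores the NULLs from the LEFT JOIN
  FROM dbo.Counter c
  LEFT JOIN dbo.Sales s                      -- hypothetical table
    ON s.SaleDate >= DATEADD(dd, c.PK_CountID, @StartDate)
   AND s.SaleDate <  DATEADD(dd, c.PK_CountID + 1, @StartDate)
 WHERE c.PK_CountID BETWEEN 0 AND DATEDIFF(dd, @StartDate, @EndDate)
 GROUP BY DATEADD(dd, c.PK_CountID, @StartDate)
 ORDER BY BucketDate
```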

• Howdy folks,

Someone who wishes to remain anonymous sent me an email asking the following questions related to this article. I've not worked with SQL Server 2008 (shoot, I'm just getting up to speed on 2k5), so I can't answer them... I sure could use some help if you have a minute. Here are the questions I was asked...

1. (concerning 2k8) One feature, I believe I saw, was the ability to pass multidimensional arrays to procedures. Is it as easy as setting a "@myTable As table" for a passed-in parameter to a stored proc?
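For reference, the 2008 feature being asked about is table-valued parameters. It's close to that, but the table type has to be created first and the parameter must be marked READONLY; the names below are invented for illustration:

```sql
-- SQL Server 2008 table-valued parameter sketch; type and proc names are invented.
CREATE TYPE dbo.MyTableType AS TABLE (ID int NOT NULL, Value VarChar(100))
GO
CREATE PROCEDURE dbo.MyProc
    @MyTable dbo.MyTableType READONLY   -- table-valued parameters must be READONLY
AS
    SELECT ID, Value FROM @MyTable
GO
```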

2. ... since table variables are already used in stored procedures, is this feature more valuable in Visual Studio, and should it be discussed in that arena?

3. As a side note: I have heard Microsoft purchased Dundas reports, and I believe I have seen that it is bundled in Reporting Services 2008. If I begin working with the community releases of SQL 2008, will those reports be available already?

Thanks for any information you may have on these questions.

--Jeff Moden