November 22, 2010 at 7:35 am
Hi Guys,
I'm writing a rather big management information system; a monthly stored procedure will be run to generate lots of numbers for 26 pages of size-7 Helvetica PDFs.
The best way I can think of for generating this report is to hold each number in a variable.
I think I'm going to end up with close to 2000 FLOAT variables.
Is there a limit to how many variables I can declare in one stored proc?
If I have a stored proc called foo with 2000 variables in it, and I call it 12 times in succession from a stored proc called bar, will it crash???
Thanks!
Ben
^ That's me!
----------------------------------------
01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010
----------------------------------------
November 22, 2010 at 7:41 am
To be precise, I need 1730 variables of data type int to produce this report.
Ben
November 22, 2010 at 8:04 am
From my old notes, SQL 2000 had these limits; 2005/2008 might allow more now.
· A UDF can have up to 1,023 input parameters; a stored procedure can have up to 2,100 input parameters.
Lowell
November 22, 2010 at 8:08 am
Thanks Lowell.
These aren't input parameters but rather individual variables created like DECLARE @fYetAnotherDamnedNumber AS FLOAT
I'm going cross-eyed looking at so many variables!
Ben
November 22, 2010 at 8:48 am
BenWard (11/22/2010)
Thanks Lowell. These aren't input parameters but rather individual variables created like DECLARE @fYetAnotherDamnedNumber AS FLOAT
I'm going cross-eyed looking at so many variables!
OK. AFAIK, other than memory/disk constraints, there's no limit inside the body of a procedure.
Could you simply replace all those variables with some tables, even if they only had one row in them?
I'd try to avoid declared variables if at all possible... maybe a stack of CTEs to gather all the data you seem to think you need is a possibility as well?
I haven't tripped over a situation yet where I needed to declare so many variables... I'm hoping there is a cleaner solution.
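A minimal sketch of the one-row-table idea described above (all table, column, and source-table names here are hypothetical, invented for illustration):

```sql
-- Hypothetical sketch: one narrow table per report section, holding a
-- single row whose strongly typed columns replace a group of scalars.
DECLARE @MotorFigures TABLE
(
    QuotesIssued   INT   NOT NULL,
    PoliciesSold   INT   NOT NULL,
    ConversionRate FLOAT NOT NULL
);

-- One set-based INSERT computes the whole group in a single pass,
-- instead of a separate SELECT ... INTO @variable per figure.
INSERT INTO @MotorFigures (QuotesIssued, PoliciesSold, ConversionRate)
SELECT COUNT(*),
       SUM(CASE WHEN Sold = 1 THEN 1 ELSE 0 END),
       AVG(CAST(Sold AS FLOAT))
FROM dbo.MotorQuotes;   -- hypothetical source table

SELECT * FROM @MotorFigures;  -- feed the report from here
```

The same grouping could equally be expressed as a CTE feeding the final report query; the table variable form is shown only because it keeps each group inspectable on its own.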
Lowell
November 22, 2010 at 8:54 am
Cool, thanks Lowell.
We discussed it as a team and we feel that the simplicity of sensibly named, sensibly grouped variables is the best way to make this program straightforward to maintain.
I could definitely use tables and CTEs, but prefer not to for the sake of future maintenance...
If we know, for example, that the motor insurance conversion rate from our business-to-consumer site looks wrong, we can investigate by looking at that group of 20-odd variables and find where the issue lies.
Ben
November 22, 2010 at 6:39 pm
I would go with a single table with two columns: column 1 being the variable name and column 2 being the value column. Then you could make use of some SET-based code to loop through each row and perform your report...
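A sketch of that two-column name/value design (all names here are illustrative only, not taken from the poster's actual schema):

```sql
-- Hypothetical name/value table: each former scalar variable
-- becomes one row, keyed by its name.
DECLARE @ReportValues TABLE
(
    ValueName VARCHAR(100) PRIMARY KEY,
    ValueData FLOAT NOT NULL
);

-- Many "variables" populated in one statement.
INSERT INTO @ReportValues (ValueName, ValueData)
VALUES ('MotorQuotesIssued', 12345),
       ('MotorPoliciesSold', 987);

-- Read a single figure back where a scalar is still needed.
DECLARE @fMotorPoliciesSold FLOAT;
SELECT @fMotorPoliciesSold = ValueData
FROM @ReportValues
WHERE ValueName = 'MotorPoliciesSold';
```

The trade-off is that every value shares one type (FLOAT here), so the per-variable strong typing the original poster likes is lost.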
November 23, 2010 at 10:07 am
I agree: AFAIK, the only restriction on the number of variables is memory space.
Scott Pletcher, SQL Server MVP 2008-2010
November 24, 2010 at 2:36 am
ColdCoffee (11/22/2010)
I would go with a single table with two columns: column 1 being the variable name and column 2 being the value column. Then you could make use of some SET-based code to loop through each row and perform your report...
Hmm, yes, that would be another way of doing it. However, I'm a C/C++ coder at heart (not that I ever get to do any of it these days), and these 'table' things of which you speak are but mythical creatures that magically hold on to information for me with varying degrees of reliability, whereas strongly typed variables make more sense to me.
I always struggle to make use of things when I don't know exactly what they're doing in the background. Like, why is a SQL database a .mdf file? What's wrong with having separate .dat, .idx, .exe, etc.? I like to know where my data is! Never mind, eh, I'll adapt...
Ben
November 24, 2010 at 8:05 am
Having a variable table would add a lot more overhead too; I don't see any real gain from it.
If you need ~2K variables, you need them; just go ahead and DECLARE them 🙂.
Scott Pletcher, SQL Server MVP 2008-2010
November 25, 2010 at 12:44 am
ColdCoffee (11/22/2010)
make use of some [font="Arial Black"]SET-based code to loop [/font]
Whiskey Tango Foxtrot, CC?????? 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
December 2, 2010 at 11:31 am
I love ANSI C coding too, but SQL is a dramatically different language, with its own benefits and disadvantages. C is close enough to assembly to be good at loops: tight, fast, efficient. SQL is based on set operations, and excels at operating on a specified subset of well-defined columns' worth of values in a single operation (and is horrific at loops and at function-call overhead).
Think of a "table" as a two dimensional array of pointers to memory locations, where each "row" is a "record" and contains an exact data format, and each "column"'s pointers each point to an independent, strongly typed memory location.
Now, if you can enumerate those 1730 variables exactly, and you're not going to have to hand-type completely different, unpredictable names for each of them, there would appear to be some rhyme or reason behind the naming, and thus the purpose, which would indicate that one or more temporary tables would be a cleaner and more extensible solution. (In SQL Server, a temporary table is one whose name starts with a single # sign; it is scoped to your session's lifetime, is automatically destroyed after your session ends, and multiple sessions can each have a #temp table of the same name.)
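A short sketch of the temp-table semantics just described (all table and column names are hypothetical):

```sql
-- A single # prefix makes the table session-scoped: visible only to
-- this session, and dropped automatically when the session ends.
CREATE TABLE #ReportFigures
(
    FigureGroup VARCHAR(50)  NOT NULL,  -- e.g. one group per report section
    FigureName  VARCHAR(100) NOT NULL,
    FigureValue FLOAT        NOT NULL,
    PRIMARY KEY (FigureGroup, FigureName)
);

INSERT INTO #ReportFigures (FigureGroup, FigureName, FigureValue)
VALUES ('MotorB2C', 'ConversionRate', 0.42);

-- Investigating one suspect group becomes a filtered query rather
-- than reading through 20-odd individually named variables.
SELECT FigureName, FigureValue
FROM #ReportFigures
WHERE FigureGroup = 'MotorB2C';

DROP TABLE #ReportFigures;  -- optional; automatic at session end
```
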
Study up a little on tables, temp tables, and very basic SQL. The two-day course IBM ran in the late '90s was the best introduction I've seen so far (much better than Oracle's), and the various SQL dialects are about as similar as the various C dialects (except HP-UX); the basics are very, very similar.