A New (and Hopefully Better) Approach to Constants

  • Comments posted to this topic are about the item A New (and Hopefully Better) Approach to Constants

SQL#: https://SQLsharp.com/ (SQLCLR library of over 340 functions and procedures)
Sql Quantum Lift: https://SqlQuantumLift.com/ (company)
Sql Quantum Leap: https://SqlQuantumLeap.com/ (blog)
Info sites: Collations • Module Signing • SQLCLR

• Why not declare a user-defined function for each constant required? This will work in SQL Server 2000 as well.

    For example:

[font="Courier New"]CREATE FUNCTION Const_Address_Work()
RETURNS INT
AS
BEGIN
    RETURN 1;
END[/font]

    Then simply use it like so:

[font="Courier New"]SELECT addr.Line1, addr.Line2, addr.City, addr.State
FROM Addresses addr
WHERE addr.AddressType = dbo.Const_Address_Work()[/font]

    This has the added benefit that you can then view dependencies for this specific function/constant to see what other source code refers to it. Very useful if a business rule change will affect code using that constant.
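As a sketch of that dependency check (the function name is taken from the example above), SQL Server's catalog views can list every module that references the constant function:

```sql
-- List all modules that reference the constant function
-- (sys.sql_expression_dependencies is available in SQL Server 2008+).
SELECT OBJECT_NAME(d.referencing_id) AS referencing_object
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_entity_name = N'Const_Address_Work';
```

On the SQL Server 2000 builds mentioned above, the older `EXEC sp_depends 'Const_Address_Work'` serves the same purpose.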

• The only real problem with this approach is that you have a lot of disk access, as SQL Server might have to repeatedly fetch the same value over and over again.

If you've got disk access issues for what would be such a small table, then I'm sorry, but you've got bigger problems. In reality this sort of table would reside almost permanently in the data cache. Thus all requests would happen in memory and not result in disk access.

    Personally, I'd go for the table and/or udf approach before installing an extra item on the server.

    Colt 45 - the original point and click interface

• There is a potentially big penalty to be paid for using functions in the manner you suggest.

The problem lies in the fact that such functions are evaluated in-line. This means that SQL Server will evaluate the function for each row in the table, thus resulting in a full table scan.

In order to prevent this you will need to declare a variable that you can assign the value to, e.g.:

DECLARE @var int

SET @var = dbo.udf()

SELECT * FROM sometable WHERE somefield = @var

whereas

SELECT * FROM sometable WHERE somefield = dbo.udf()

will give you a full table scan (FTS).

  • I believe that by definition a function that is used to set/retrieve a constant should be used as you've shown.

Putting my mis-shaped, ill-fitting programmer hat on, isn't it standard practice to define constants at the start of a module/class/etc.?

    Ok, now I'll put my nice DBA hat back on. 😉

    Colt 45 - the original point and click interface

• I think at the method (stored proc) level, defining constants as SET @somevar = somevalue is easy to do, readable, and maintainable. I saw the earlier comment about module-level constants, and while not encouraged, they happen and often even make sense; it's in this area that SQL really has no option other than to table out the constants. Imagine that SQL had a built-in CONST(constname) that worked in database scope: would we use it? I suspect some would, but in other cases the constant would probably be one more switch on some config table somewhere.

I guess - it being early and me therefore being contrary - that I don't see what a .NET language does to help us in this case. I'm not opposed to using the CLR where it fits, just not sure it's valuable here.

• Why, why, why?????? SQL is set-based; why must developers constantly seek to overcomplicate things? You only need a lookup table.

I've encountered a software release (on a major big system) where the developers sought to implement functions as constants to replace 12 simple values required in a lookup table. The result, as pointed out earlier, was to turn all the queries into row-based (cursor) ops and actually made the whole project unworkable.

    It probably looks fine on a laptop with a tiny data set but scale it up into production data sets and you're in for big trouble.


    [font="Comic Sans MS"]The GrumpyOldDBA[/font]

  • The point made in a post above about using a scalar-valued function in the WHERE clause resulting in a full table scan is incorrect. When the function has no parameters or none of the parameters passed into the function come from columns in the table then SQL Server can evaluate the function once and use an index seek (assuming there is an appropriate index).

    In the example below, the execution plans for queries in options 1 and 2 are the same, utilizing a clustered index seek. Option 3 (while admittedly silly) passes in a column from the table as a parameter to the function and thus requires a full table scan.

CREATE TABLE #Letter
(
    Letter char(1) PRIMARY KEY NOT NULL,
    Number int NOT NULL
)
    INSERT #Letter

    VALUES ('A', 1)

    INSERT #Letter

    VALUES ('B', 2)

    INSERT #Letter

    VALUES ('C', 3)

CREATE FUNCTION dbo.GetFirstLetter(@text varchar(100)) RETURNS CHAR(1)
AS
BEGIN
    RETURN LEFT(@text, 1)
END
    -- Option 1

    DECLARE @firstLetter char(1)

    SET @firstLetter = dbo.GetFirstLetter('Apple')

    SELECT * FROM #Letter WHERE Letter = @firstLetter

    -- Option 2

    SELECT * FROM #Letter WHERE Letter = dbo.GetFirstLetter('Apple')

    -- Option 3

    SELECT * FROM #Letter WHERE dbo.GetFirstLetter(Letter) = 'A'

  • Does the constant HAVE to be stored in a table??

    Would it not be easier to just have the UDF return the value? (it is constant after all)

    as in:

CREATE FUNCTION fnGetConst(@cWanted INT)
RETURNS VARCHAR(10)
AS
BEGIN
    DECLARE @RetVal VARCHAR(10)

    SELECT @RetVal = CASE @cWanted WHEN 1 THEN 'boo'
                                   WHEN 2 THEN 'hoo' END

    RETURN @RetVal
END

  • This just shows how people will overly complicate a set-oriented or SQL based solution because they are trying to fit it into a world they are more comfortable with.

    And this article is just a shameless plug to go to the author's site to download his cool stuff.

    In compiled-programming we use constants so we don't have the magic #100 spread throughout the code. When the "max-cartons-per-palette" rule changes to 150, we don't want to find all the cases of "100" and try to figure out whether it was *that* 100 or some other 100 in order to safely change it to 150.

    Under the hood, the compiler turns "MAX_CARTONS" into the literal 100 for when the program actually executes.

    In SQL and table-oriented designs, there are 2 major differences.

First, the code is interpreted (generally) and not compiled. Other posters have already pointed out the pitfalls of this "functions-for-constants" approach.

    Second, and more importantly, this is precisely what lookup tables were designed for-- to assign a simple constant to something bigger that could change, like a status-code-description.

Far and away the most common "constant" situation I encounter is something like "status code 3 means 'failed'". So people start coding around it in all sorts of convoluted ways because they come from a programming background where they were taught that hard-coding a "3" in their code is EVIL and they will go to programmer hell for it. I saw one case where the programmer was doing:

    select ...

    from ...

    where StatusCode = (select StatusCode from LookupTable where description = 'failed')


    The problem is-- it's not the NUMBER that will change. It's the other parts. Anything *except* the number might change. "Failed" might become "Failure" or "Didn't Work". But in good lookup table design, the 3 will remain always and forever the surrogate to identify the underlying concept of a failed state.

    The lookup table provides a constant, never changing source of this fact and is always available for reference if needed. The 3 will never change.
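A minimal sketch of the lookup-table design being described (table and column names are assumed for illustration):

```sql
-- The surrogate key (3) is the permanent identifier;
-- only the description is allowed to drift.
CREATE TABLE dbo.StatusCode
(
    StatusCode  int         NOT NULL PRIMARY KEY,
    Description varchar(50) NOT NULL
);

INSERT dbo.StatusCode (StatusCode, Description) VALUES (3, 'Failed');

-- 'Failed' can later become 'Failure' without touching any code:
UPDATE dbo.StatusCode
SET Description = 'Failure'
WHERE StatusCode = 3;
```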

    It would be NICE if T-SQL allowed more structured programming, like defining other modules that you can "include" and such. Basically if it were compiled. But for now it's not. And workarounds like this article proposes just make it more complex without adding value.

    The best solution I've seen is simply to work-around T-SQL's language by declaring constants at the top of a procedure:

    declare @STATUS_FAILED int; set @STATUS_FAILED = 3;

    ... (later in code) ...

    select field

    from table

    where Status = @STATUS_FAILED;

... Then use that variable (upper-case to make it visually distinct from ones that can change during the procedure) in the rest of the code.

    We don't have to worry about ever changing the 3 to mean something else, so it isn't a case where having some named constant makes maintenance easier.

An almost-as-good alternative is to just hard-code the numeric value with a comment:

    SELECT ....

    FROM MyTable

    WHERE Status = 3 /*failed*/

    Remember, for all the cases driven by lookup tables, it's the *number* (or whatever PK you used) that doesn't change.

    For other numeric constants like Int-Upper-Bound or Pi or something, I agree with the other poster about using a one-row table. I personally use an instead-of upd/del trigger to disallow changes to that table except for official rollout scripts we publish. But same idea.
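A rough sketch of that protective trigger (table, column, and message text are all assumed, not from the original post):

```sql
-- One-row constants table for values like Pi or a size bound.
CREATE TABLE dbo.Constants
(
    MaxCartonsPerPalette int   NOT NULL,
    Pi                   float NOT NULL
);

INSERT dbo.Constants (MaxCartonsPerPalette, Pi) VALUES (100, PI());

-- An INSTEAD OF trigger swallows ad-hoc changes; an official
-- rollout script would DISABLE/ENABLE the trigger around its update.
CREATE TRIGGER trg_Constants_ReadOnly
ON dbo.Constants
INSTEAD OF UPDATE, DELETE
AS
BEGIN
    RAISERROR('dbo.Constants may only be changed by official rollout scripts.', 16, 1);
END;
```

Because the trigger fires instead of the update/delete and performs no DML of its own, the attempted change simply never happens.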

To the author: By the way, please go back and read more on relational theory. Not only because of the nonsense about constants, but also to understand why identifying them by "Western" or "Region.Western" has nothing to do with relational theory at all. Start with Chris Date's stuff.

  • Jobob Smith (10/16/2007)

    Does the constant HAVE to be stored in a table??

    Would it not be easier to just have the UDF return the value? (it is constant after all)

    as in:

CREATE FUNCTION fnGetConst(@cWanted INT)
RETURNS VARCHAR(10)
AS
BEGIN
    DECLARE @RetVal VARCHAR(10)

    SELECT @RetVal = CASE @cWanted WHEN 1 THEN 'boo'
                                   WHEN 2 THEN 'hoo' END

    RETURN @RetVal
END

    In relational databases lookup tables are the way to go. Period.

    For several reasons:

    1. Flexibility

2. The engine is optimized to handle tables (it uses statistics of the values, etc.)

    3. That is the way Relational databases are supposed to be handled.

    4. Totally agree that the table will remain in memory most of the time and no disk access is incurred!


    * Noel

• I would've thought a simple view would be OK for the previous example, with the "3" hard-coded in it. This view can then be used everywhere, so there should only be one hard-coded occurrence of "3" (which is the minimum required; even constants hard-code the value in their definition!). Or am I missing something?
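That single-occurrence idea might look like this (the table and column names are assumed for illustration):

```sql
-- The literal 3 appears exactly once, inside the view definition;
-- every other piece of code selects from the view instead.
CREATE VIEW dbo.FailedOrders
AS
SELECT o.OrderID, o.CustomerID, o.Status
FROM dbo.Orders AS o
WHERE o.Status = 3; -- 'failed'
```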

    As for all other "constants", why can't you define it anywhere you like? They'll all require the hard-coding of something. A function for its name, (plus any parameters if you follow the article's suggestions), whilst tables need their name and any identifying column/value, (depending if it only has one row/multiple columns, or two columns/multiple rows).

    Until SQL Server offers an actual "CONSTANT" keyword, then just use whatever's appropriate to your project, and stop moaning about everyone else's choices.

    Paul H


  • [font="Verdana"]Well, I would have to agree that the article is a bit of a shameless plug in order to get us to download the cool stuff offa their site! But as far as I can see, the stuff I downloaded IS quite cool, so no harm done IMO, in fact saved me a bit of work! 😛[/font]

    Con mucho carino,

    RiK Munoz

• Constants don't belong in SQL. Set a view as a constant? God help us!

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]

• I cannot see the benefit of invoking the CLR for such a trivial purpose. The idea of a single-row table/view with the constants as columns is probably best. Someone else has already rightly pointed out that a function that does not accept parameters and only has a simple return statement is easily handled by the SQL engine and doesn't result in table scans.

The other big problem I saw with the bit of the article I read was the author's claim that the .NET hashtable equivalent stores and returns varchars, which can IMPLICITLY be converted to ints, etc. When you compare your int data in a table with the varchar result of this hashtable, you may have your nicely indexed int column being implicitly converted to a varchar... yuk. 🙂
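One defensive sketch, assuming a hypothetical varchar-returning helper `dbo.GetConstant` (not from the article): cast the result explicitly so the comparison happens in the column's native type:

```sql
-- dbo.GetConstant is hypothetical; it stands in for any helper that
-- hands back its value as varchar. The explicit CAST keeps the
-- comparison in int, so no conversion touches the indexed column.
SELECT *
FROM dbo.Orders
WHERE StatusCode = CAST(dbo.GetConstant('FailedStatus') AS int);
```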
