Multiple data types for the same column

  • No problem, glad I could help.

    BTW, I noticed that you are using 1 as your seed value for a primary key with a datatype of INT.

    Did you know that INT is a signed type (i.e., it allows negative values)?

    So the allowable range for an INT column is

    -2,147,483,648 to 2,147,483,647

    If you seed at one, you are effectively cutting the datatype's range in half.

    Some don't like having negative IDs; personally I find them appealing for that very reason (i.e., they keep developers and users from becoming dependent on ID values).
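
    As a sketch, seeding an identity column at the INT minimum looks like this (the table and column names here are made up for illustration):

    ```sql
    -- A sketch: seeding the identity at the INT minimum instead of 1
    -- makes the full signed range available for keys.
    CREATE TABLE exampleTable (
        exampleID INT IDENTITY (-2147483648, 1) PRIMARY KEY,
        exampleValue VARCHAR(100) NOT NULL
    )
    ```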

  • First of all, your SQL worked brilliantly. Thank you. I had to fix some column and table names and throw in a couple of commas, but I got the point of what you were doing. Here is the final SQL in case anyone else runs across this same issue:

    SELECT q.questionText, a.answerYear, a.answerID, CONVERT(VARCHAR(100), at.answer) AS 'AnswerValue'

    FROM dbo.questions AS q

    INNER JOIN dbo.answers AS a

    ON q.questionID = a.questionID

    INNER JOIN dbo.answersText AS at

    ON a.answerID = at.answerID


    SELECT q.questionText, a.answerYear, a.answerID, CONVERT(VARCHAR(100), ad.answer) AS 'AnswerValue'

    FROM dbo.questions AS q

    INNER JOIN dbo.answers AS a

    ON q.questionID = a.questionID

    INNER JOIN dbo.answersDecimal AS ad

    ON a.answerID = ad.answerID


    SELECT q.questionText, a.answerYear, a.answerID, CONVERT(VARCHAR(100), ai.answer) AS 'AnswerValue'

    FROM dbo.questions AS q

    INNER JOIN dbo.answers AS a

    ON q.questionID = a.questionID

    INNER JOIN dbo.answersInt AS ai

    ON a.answerID = ai.answerID


    SELECT q.questionText, a.answerYear, a.answerID, CONVERT(VARCHAR(100), ac.answer) AS 'AnswerValue'

    FROM dbo.questions AS q

    INNER JOIN dbo.answers AS a

    ON q.questionID = a.questionID

    INNER JOIN dbo.answersChar AS ac

    ON a.answerID = ac.answerID

    And that will run on the following DDL:

    create table users (

    userID int identity (1,1) primary key,

    userName varchar(100) not null
    )


    create table questions (

    questionID int identity (1,1) primary key,

    questionText varchar(255) not null
    )


    create table answers (

    answerID int identity (1,1) primary key,

    questionID int not null,

    answerYear numeric(4,0) not null,

    userID int not null, /* user who entered the answer */

    foreign key (questionID) references questions (questionID),

    foreign key (userID) references users (userID)
    )


    create table answersInt (

    answerID int primary key,

    answer int not null,

    foreign key (answerID) references answers (answerID)
    )


    create table answersDecimal (

    answerID int primary key,

    answer decimal(9,2) not null,

    foreign key (answerID) references answers (answerID)
    )


    create table answersChar (

    answerID int primary key,

    answer varchar(255) not null,

    foreign key (answerID) references answers (answerID)
    )


    create table answersText (

    answerID int primary key,

    answer text not null,

    foreign key (answerID) references answers (answerID)
    )

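
    For reference, the four SELECTs above can also be combined into a single result set with UNION ALL, wrapped in a view so reports only query one object (a sketch; the view name vAllAnswers is an assumption, not from this thread):

    ```sql
    -- A sketch: one view over all four answer tables via UNION ALL.
    CREATE VIEW dbo.vAllAnswers AS
    SELECT q.questionText, a.answerYear, a.answerID,
           CONVERT(VARCHAR(100), at.answer) AS AnswerValue
    FROM dbo.questions AS q
    INNER JOIN dbo.answers AS a ON q.questionID = a.questionID
    INNER JOIN dbo.answersText AS at ON a.answerID = at.answerID
    UNION ALL
    SELECT q.questionText, a.answerYear, a.answerID,
           CONVERT(VARCHAR(100), ad.answer)
    FROM dbo.questions AS q
    INNER JOIN dbo.answers AS a ON q.questionID = a.questionID
    INNER JOIN dbo.answersDecimal AS ad ON a.answerID = ad.answerID
    UNION ALL
    SELECT q.questionText, a.answerYear, a.answerID,
           CONVERT(VARCHAR(100), ai.answer)
    FROM dbo.questions AS q
    INNER JOIN dbo.answers AS a ON q.questionID = a.questionID
    INNER JOIN dbo.answersInt AS ai ON a.answerID = ai.answerID
    UNION ALL
    SELECT q.questionText, a.answerYear, a.answerID,
           CONVERT(VARCHAR(100), ac.answer)
    FROM dbo.questions AS q
    INNER JOIN dbo.answers AS a ON q.questionID = a.questionID
    INNER JOIN dbo.answersChar AS ac ON a.answerID = ac.answerID
    ```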

    So I assume that I just widen the datatype used in the CONVERT function as needed, right? I mean, if I get a text answer I would need to convert them all to text, correct?

  • In response to your suggestion about negative identity columns:

    Yes, I like your idea about making them less nice to work with so as to discourage app developers from being dependent on them, as well as getting the full range of use out of the Int type.

    So far we haven't even come close to needing the full range of the Int datatype. On a lot of tables I could even get away with using SmallInt. I almost always use TinyInt on lookups that I know will never need a larger range of values.
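
    For reference, the SQL Server integer ranges are: TINYINT 0 to 255, SMALLINT -32,768 to 32,767, and INT -2,147,483,648 to 2,147,483,647. A small lookup table along the lines being described might look like this (the table and column names are illustrative, not from this thread):

    ```sql
    -- A sketch: TINYINT key for a lookup that will never exceed 255 rows.
    CREATE TABLE answerStatus (
        statusID TINYINT PRIMARY KEY,
        statusName VARCHAR(50) NOT NULL
    )
    ```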

    As far as the app developers being dependent on them, I am not sure if that is the case. I hope they aren't hard coding them in EVER. I can think of a couple of tricks to find out though.

    Thanks again for your help. I have only been working for about a year and a half now and I really appreciate forums and people who post solutions and suggestions.

  • No problem, let me know if you need help.

    You're right on about the CONVERT type.

    What I would do to find the correct varchar length would be to look at your types and find the longest...

    Also, don't forget to look up "CAST and CONVERT" in BOL to find the style numbers to get your types all converted into the desired format.
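
    For a quick illustration of what the style argument does (the date formats here are the standard documented styles):

    ```sql
    -- The third argument to CONVERT selects an output format ("style").
    SELECT CONVERT(VARCHAR(10), GETDATE(), 101) AS usFormat,   -- mm/dd/yyyy
           CONVERT(VARCHAR(10), GETDATE(), 120) AS isoFormat   -- yyyy-mm-dd
    ```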


  • Thanks.

    Actually, if just one of the answers is ever large enough to require that it be stored as the text datatype, I would need to convert everything to text, I would think.

    Yes, I know what you mean, I use that third argument on CONVERT to get the right datetime format all the time. I will probably have to nest a CONVERT inside the outer convert to get it how I want and then display as text or varchar.

  • The thought occurred to me that there was some change from SQL Server 2000 to 2005 regarding Text, so I googled it and I was right.

    So I don't have to use text ever, thus eliminating any restrictions I thought I had to work within. I could now use a single stored proc to determine which answer table to store the answer data in. This will simplify the apps that insert and update the survey data.
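
    In SQL Server 2005, VARCHAR(MAX) replaces TEXT, so the answersText table could be declared like this instead (a sketch based on the DDL above):

    ```sql
    -- VARCHAR(MAX) stores up to 2 GB but behaves like an ordinary
    -- varchar column, so no TEXT-specific restrictions apply.
    CREATE TABLE answersText (
        answerID INT PRIMARY KEY,
        answer VARCHAR(MAX) NOT NULL,
        FOREIGN KEY (answerID) REFERENCES answers (answerID)
    )
    ```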

  • If you make the decision in the proc, it will have to redo the execution plan every time AFAIK.

    That's why I recommended doing it in the application's DAL instead.

    Does that make more sense?
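
    The kind of proc under discussion would look something like this (a sketch only; the proc name, parameter names, and type codes are assumptions, not from this thread):

    ```sql
    -- A single proc that branches on the answer's type. Whether this
    -- branching hurts plan reuse depends on the SQL Server version;
    -- 2005 can recompile at the statement level.
    CREATE PROCEDURE dbo.insertAnswer
        @answerID INT,
        @answerType CHAR(1),      -- 'I' = int, 'D' = decimal, etc.
        @answerValue VARCHAR(MAX)
    AS
    BEGIN
        IF @answerType = 'I'
            INSERT INTO answersInt (answerID, answer)
            VALUES (@answerID, CONVERT(INT, @answerValue))
        ELSE IF @answerType = 'D'
            INSERT INTO answersDecimal (answerID, answer)
            VALUES (@answerID, CONVERT(DECIMAL(9,2), @answerValue))
        -- ...and so on for the char and text tables.
    END
    ```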

  • Hmmm, perhaps I don't understand this. I thought that all stored procedures are compiled and stored by SQL Server, and as such offer good performance. I guess I don't understand what is meant by "execution plan". I assume by DAL you are referring to Data Access Layer?

    Another performance based question I have is this. It seems I have two options for table structure. Now that I know varchar(max) replaces the text datatype in SQL Server 2005, I could go with one single answers table and store the answer data in one column of type sql_variant. Alternatively I could break it out into separate tables like I did and use the view you worked out to simplify the development of the reports.

    I guess I am wondering about the performance differences. More importantly, how can I calculate the performance differences myself in some concrete, measurable way? I got my BS in computer science and naturally they didn't teach us anything practical like this. I know how to approximate the time of an algorithm, but that only works if you know the steps. All this stuff we do with SQL Server happens under the hood and is a black box to us. Is there somewhere you can get information about the algorithmic efficiency of each operation? Like I know that in general an index lets me select something in O(log n) time. What about union? What about inner join?

    Hmm, I just wrote an awful lot. I don't seriously expect anyone to answer all that. Any suggestions are always greatly appreciated though!

  • lol, I'm totally following you.

    It's late here, the screwdrivers are sinking in and the wife wants me to come to bed...I'll get you a reply in 12 hours or so...


  • Okay, first, I haven't worked with SQL 05 yet, I'm still up to my neck in bad 2000 databases. So I'm not 100% sure that what I'm saying is what 05 does.

    Your first question, what is compiled and stored is the execution plan.

    Open query analyzer, go to query, "Display Estimated Execution Plan".

    Look in the results pane, and click on the "Estimated Execution Plan" tab if it's not selected. Now, mouse over the icons.

    That will give you the stats to improve performance.

    I would advise against the variant type. If you know something is a boolean or an int, then I believe you should constrain it to be that.

    For example, I saw recently a database (not mine!) that used CHAR(42) for a phone number field. Needless to say, they were unable to contact a large number of their customers due to the bad/invalid phone numbers.

    SQL Server is not VBScript, where everything is a variant. It has strong types for a reason: to keep applications (or more precisely, data from applications) from invalidating the constraints.

    Also, keep in mind that those types of constraints (yes, technically a type is a constraint) are data constraints. Not anything close to business logic (when you attempt to put proper data constraints on a database, some will try to claim that you are putting business logic in. They are wrong.)
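
    To illustrate the point (the table names here are made up): a strongly typed column rejects bad data at insert time, while a sql_variant column accepts almost anything.

    ```sql
    CREATE TABLE typedAnswers   (answer INT NOT NULL)
    CREATE TABLE variantAnswers (answer SQL_VARIANT NOT NULL)

    -- Fails with a conversion error: the type constraint catches it.
    INSERT INTO typedAnswers (answer) VALUES ('not a number')

    -- Succeeds silently: the bad value is now stored.
    INSERT INTO variantAnswers (answer) VALUES ('not a number')
    ```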

    As for doing performance related stuff, go to Enterprise Manager -> Tools -> SQL Profiler.

    Also, for basic comparisons of query performance, just compare the time.

    The execution plans can really help you improve stuff, it just takes a while to be good at it.
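
    One concrete way to compare two candidate queries is to turn on timing and I/O statistics, run each query, and compare the numbers in the Messages pane (a sketch using one of the joins from earlier in the thread):

    ```sql
    -- Per-statement CPU/elapsed time and logical/physical reads
    -- are printed after each query runs.
    SET STATISTICS TIME ON
    SET STATISTICS IO ON

    SELECT q.questionText, a.answerYear
    FROM dbo.questions AS q
    INNER JOIN dbo.answers AS a ON q.questionID = a.questionID

    SET STATISTICS TIME OFF
    SET STATISTICS IO OFF
    ```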
