Worst Practice - Defining Rows that Exceed The Max Length

  • There is nothing wrong with the logic of using TEXT instead of varchar (or nvarchar), except when logging is required using triggers. In that case you are unable to log the changes to the TEXT fields, and varchar is more suitable. What you do need is to ensure that you check the ADO error codes and that you react to size overruns. However, in my experience there are many cases where the design is driven not by the pure approach of total normalisation (all text fields into a single common TEXT table), but by the pragmatic one of keeping like information together, knowing that it will be very rare for the size limit to be exceeded. This does not imply that all fields are varchar(8000), but that a combination of varchars of different sizes, suited to the data, may exceed the 8060-byte limit in some situations, even though it is rare for them all to be at their maximum length at the same time (see the sketch after this post). I fully agree that the widespread use of varchar(8000) for all character fields is worst practice.


    Roger Layton (roger@rl.co.za)
    Managing Director, Roger Layton Associates (Pty) Ltd (www.rl.co.za)
    TEL: +27-11-880-9153
    FAX: +27-11-447-5799
    MOBILE: +27-82-881-0380

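    A minimal sketch of that situation, assuming SQL Server 2000 and hypothetical table and column names: each varchar is a reasonable size on its own, but their combined maximum exceeds 8060 bytes, so the table is created with a warning and an INSERT fails only when the populated values together are too wide.

        -- Declared maximum row size exceeds 8060 bytes, so SQL Server 2000
        -- creates the table but warns that some rows may not fit.
        CREATE TABLE dbo.ContactNotes (
            ContactID    int           NOT NULL,
            Summary      varchar(2000) NULL,
            Background   varchar(3000) NULL,
            Instructions varchar(3000) NULL,
            FollowUp     varchar(1000) NULL
        )

        -- Succeeds: the actual data is well under the limit.
        INSERT INTO dbo.ContactNotes (ContactID, Summary)
        VALUES (1, 'Short note')

        -- Fails at run time because the populated columns together exceed
        -- 8060 bytes, which is why the calling code (ADO or otherwise)
        -- must check the error and react to the overrun.
        INSERT INTO dbo.ContactNotes (ContactID, Summary, Background, Instructions, FollowUp)
        VALUES (1, REPLICATE('s', 2000), REPLICATE('b', 3000),
                REPLICATE('i', 3000), REPLICATE('f', 1000))
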
  • I agree that using lots of varchar(8000) for field definitions is generally bad. But I have some configuration or other special-purpose tables where varchars may get very long, and I can't limit them to, say, 1000, because that might be too small. Also, some data import tools create tables with varchar(8000) fields (DTS?). I don't see anything bad in that if you know what you are doing. The only bad thing that can happen is that if the cumulative size of the row is too big, it won't be saved in the database, so you have to reduce the data or use blobs. If you hit some other constraint violation you also can't save the data in the table. How is that different? The size limit is also a constraint. A "Data would be truncated" error also doesn't let you save the row. Not much different to me. The important thing is that you are aware of this limitation and know where it might prevent you from saving data, just like any other constraint or limitation (as sketched below).

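    A rough sketch of treating the size limit like any other constraint, again assuming SQL Server 2000 and a hypothetical staging table: check @@ERROR after the INSERT and fall back to reduced data (or blobs) when the row is too wide.

        CREATE TABLE dbo.ImportStaging (
            RawLine1 varchar(8000) NULL,
            RawLine2 varchar(8000) NULL
        )

        DECLARE @Line1 varchar(8000), @Line2 varchar(8000), @err int
        SELECT @Line1 = REPLICATE('x', 5000), @Line2 = REPLICATE('y', 5000)

        -- 5000 + 5000 bytes is over the 8060-byte limit, so this fails.
        INSERT INTO dbo.ImportStaging (RawLine1, RawLine2)
        VALUES (@Line1, @Line2)

        SET @err = @@ERROR
        IF @err <> 0
        BEGIN
            -- Handle it like any other constraint violation: reduce the
            -- data and retry, or move the long values into a text column.
            INSERT INTO dbo.ImportStaging (RawLine1, RawLine2)
            VALUES (LEFT(@Line1, 4000), LEFT(@Line2, 4000))
        END
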
  • The reason I don't use ntext is that you can't define local variables of that type (illustrated below). I have some pretty complex stored procedures and I need the flexibility of local variables.

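    To illustrate (again assuming SQL Server 2000): text, ntext and image are not valid types for local variables, so varchar is the fallback despite its 8,000-character ceiling.

        DECLARE @Body  ntext           -- fails: ntext is invalid for local variables
        DECLARE @Body2 varchar(8000)   -- works, but capped at 8,000 characters
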
  • I once worked for a company whose staff was so ignorant of database design that they suddenly began getting errors on their website and could not understand why. They asked me to look into it, and the offending insert/update statements were on a table that had over 200 columns. I calculated the maximum row size if every column were fully populated, and it exceeded 24,000 bytes. About 75 of the columns were blob fields, and there were several repeated fields (such as title1, title2, title3, etc.).

    After I was able to stop laughing hysterically, I communicated the problem to the client, to the obvious blank look of someone not comprehending why that kind of problem could not be addressed in a couple of hours of bug fixing.

    Karen Gayda
    MCP, MCSD, MCDBA

    gaydaware.com

  • Logging via triggers is OK, but INSTEAD OF triggers give you some options that weren't there previously (see the sketch after this post).

    Bravo to anyone who at least traps for the error if it should occur. I'd bet that is almost always overlooked.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

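    A hedged sketch of that INSTEAD OF option, assuming SQL Server 2000 and hypothetical table names: an AFTER trigger cannot reference a text column in the inserted table, but an INSTEAD OF trigger can, so it can log the value and then perform the insert itself.

        CREATE TRIGGER trg_Documents_LogInsert
        ON dbo.Documents
        INSTEAD OF INSERT
        AS
        BEGIN
            -- Reading the text column from inserted is allowed here,
            -- though it is not allowed in an AFTER trigger.
            INSERT INTO dbo.DocumentsLog (DocID, Body, LoggedAt)
            SELECT DocID, Body, GETDATE()
            FROM inserted

            -- The INSTEAD OF trigger replaces the original statement,
            -- so perform the actual insert explicitly.
            INSERT INTO dbo.Documents (DocID, Body)
            SELECT DocID, Body
            FROM inserted
        END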