Amit - good article and "nice to have" if you're stuck with maintaining databases developed by someone else.
I've had long and heated arguments with some of my colleagues over the choice of datatypes. I prefer to fine-tune everything and use just what I need, and since I'm the developer it's easy for me to enforce this. Even so, I've had to fight to assign tinyint to lookup tables where I know the number of rows will NEVER exceed 50. Someone with many years of DBA experience under his belt told me that the reason to play it safe and always stick with the int datatype (e.g.) is portability - so that if the database ever had to be imported into Oracle, etc., there wouldn't be any conflicts!
What is your take on the portability issue?!
Lastly, I quote one of my favourite authors, Robert Vieira ("SQL Server 2000 Programming"):
"Choose what you need, but ONLY what you need. For example, if you're trying to store months (as the numbers 1-12), those fit in a single byte using a tinyint. Why, then, do I regularly come across databases where a field that's only going to store a month is declared as an int (which is 4 bytes)? Don't use nchar or nvarchar if you're never going to do anything that requires Unicode - these datatypes take up 2 bytes for every one used by their non-Unicode cousins.
There is a tendency to think of this as a space issue - "Ah, disk space is cheap these days!" Well, beyond the notion that a name-brand SCSI drive still costs more than I care to throw away on laziness, there's also a network bandwidth issue. If you're passing an extra 100 bytes down the wire for every row, and you return a 100-row result set, that's about 10K of extra data you just clogged your network with. If you have 100 users performing 50 transactions per hour, that's over 50MB of wasted network bandwidth per hour."
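For what it's worth, Vieira's figures check out. Here's a quick back-of-the-envelope sketch (the per-row overhead, row count, user count, and transaction rate are just the numbers from the quote, not measurements):

```python
# Back-of-the-envelope check of the bandwidth arithmetic in the quote above.
# All inputs are the quote's hypothetical figures, not real measurements.
extra_bytes_per_row = 100   # extra width per row from oversized datatypes
rows_per_result = 100       # rows returned per query
users = 100
transactions_per_hour = 50  # per user

# Extra data shipped per result set: 100 bytes x 100 rows = 10,000 bytes (~10K)
extra_per_result = extra_bytes_per_row * rows_per_result

# Hourly waste across all users: 10,000 bytes x 100 users x 50 transactions
hourly_waste = extra_per_result * users * transactions_per_hour

print(f"Extra data per result set: {extra_per_result:,} bytes")   # 10,000 bytes
print(f"Wasted bandwidth per hour: {hourly_waste:,} bytes")       # 50,000,000 bytes (~50MB)
```

So the "10K per result" and "over 50MB per hour" in the quote are straight multiplication - and that's with only 100 bytes of slack per row, which a handful of int-instead-of-tinyint and nvarchar-instead-of-varchar columns can easily add up to.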
**ASCII stupid question, get a stupid ANSI!!!**