October 24, 2012 at 12:57 pm
Hi,
Is it best to use varchar or char for small text columns, max 5 to 10 characters?
An example is a currency code, which can be 3 to 5 characters and is used in a PK, an FK, and a clustered index.
Thanks,
Pedro
October 24, 2012 at 1:08 pm
Small text columns over two characters in length should almost always be varchar instead of char.
The only exception would be when you are absolutely sure that every value will be the same number of characters.
October 24, 2012 at 1:21 pm
A char(2) takes 2 bytes and a varchar(2) takes 2-4 bytes; a char(3) takes 3 bytes and a varchar(3) takes 2-5 bytes, and so on. I'll usually switch around 4-6 characters, unless the column will always have a certain number of characters, in which case it's char.
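You can see the difference directly with DATALENGTH. A minimal sketch (the variable values here are just an illustration, not from the thread):

```sql
-- Sketch: compare stored bytes for char vs varchar.
-- DATALENGTH returns the bytes actually stored for the value; it does not
-- include the 2-byte length overhead a varchar value carries on the page.
DECLARE @c char(5)    = 'EUR';   -- padded with trailing spaces to 5 bytes
DECLARE @v varchar(5) = 'EUR';   -- stores only the 3 bytes entered

SELECT DATALENGTH(@c) AS char_bytes,    -- 5
       DATALENGTH(@v) AS varchar_bytes; -- 3
```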
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 24, 2012 at 2:29 pm
Thanks,
About joins: do they have the same performance, given that a char(2) is always 2 bytes while a varchar(2) has a "dynamic" size?
Pedro
October 24, 2012 at 2:36 pm
Test and see?
I would guess pretty much comparable.
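A minimal harness for such a test might look like this; the table and column names are hypothetical, and the DBCC call empties the buffer cache, so only use it on a test server:

```sql
-- Sketch: measure both variants under identical, cold-cache conditions.
-- Table/column names (t_char, t_varchar, code) are placeholders.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DBCC DROPCLEANBUFFERS;   -- cold cache; needs sysadmin, test servers only
SELECT COUNT(*) FROM t_char a    JOIN t_char_detail b    ON a.code = b.code;

DBCC DROPCLEANBUFFERS;
SELECT COUNT(*) FROM t_varchar a JOIN t_varchar_detail b ON a.code = b.code;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```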
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 24, 2012 at 2:46 pm
Thanks... right, test... :blush:
Just asked since I'm on my tablet and can't test it right away... :-D
First thing to do in the morning... varchar(5) with different-size values in a 10,000-row table joined with a 1,000-row one. See how it does compared to char(5). 😎
Thanks,
Pedro...
PS: don't give a fish to the hungry... teach him how to fish. 😉
October 24, 2012 at 3:09 pm
Try 10 000 000 and 100 000. Don't test on small numbers of rows, the results will be lost in the measuring error.
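For row counts that size, a set-based generator is much quicker than a loop. A sketch, where the table and column names are only placeholders:

```sql
-- Sketch: generate 10 000 000 rows of 1-to-5-character codes without a loop.
-- Table/column names (test_vc, col1) are placeholders.
;WITH nums AS (
    SELECT TOP (10000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    CROSS JOIN sys.all_objects c
)
INSERT INTO test_vc (col1)
SELECT LEFT(CONVERT(char(36), NEWID()), 1 + rn % 5)   -- values 1 to 5 chars long
FROM nums;
```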
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 25, 2012 at 4:06 am
Hi,
Created the main tables with 91,000 rows and the reference tables with 10,000,000 rows.
Same data in both pairs of tables.
Ran the queries individually and the time and IO were the same.
But when I ran the queries together:
PRINT '---------------------------------------------'
SELECT s.col1fk FROM stable_c s INNER JOIN ptable_cc p ON s.col1fk = p.col1 WHERE p.col1 LIKE 'RAB%'
PRINT '---------------------------------------------'
SELECT s.col1fk FROM stable_vc s INNER JOIN ptable_vc p ON s.col1fk = p.col1 WHERE p.col1 LIKE 'RAB%'
PRINT '---------------------------------------------'
sometimes I got:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
---------------------------------------------
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(114 row(s) affected)
Table 'stable_c'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ptable_cc'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
---------------------------------------------
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(114 row(s) affected)
Table 'stable_vc'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ptable_vc'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 66 ms.
---------------------------------------------
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
The 66 ms elapsed time on the second query wasn't always there... most of the time it was also 0 ms or 1 ms...
Is that time caused by the waits, probably IO, that the 2nd query has?
Thanks,
Pedro
October 25, 2012 at 4:32 am
Probably waits, yes.
3 reads doesn't look like there was much data...
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 25, 2012 at 5:05 am
GilaMonster (10/25/2012)
Probably waits, yes. 3 reads doesn't look like there was much data...
That query only returns 114 rows...
Changed the query to return 286k rows and the IO is quite different (on the "detail" table), since char always takes 5 bytes...
Table 'stable_c'. Scan count 1, logical reads 1037, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ptable_cc'. Scan count 1, logical reads 16, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'stable_vc'. Scan count 1, logical reads 639, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ptable_vc'. Scan count 1, logical reads 23, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Same CPU time...
I made the column 5 characters long, but inserted data from 1 to 5 characters... that's why char has more reads than varchar...
The execution plan states that the char has a cost of 47% and varchar of 53%... The merge join on char is 37% and on varchar is 44%.
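One way to confirm the size difference directly is to compare page counts and average row sizes from the index physical stats DMV; a sketch using the table names above ('SAMPLED' mode is needed for avg_record_size_in_bytes to be populated):

```sql
-- Sketch: compare actual space usage of the char and varchar tables.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.page_count,
       ips.avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('stable_c'), NULL, NULL, 'SAMPLED') AS ips
UNION ALL
SELECT OBJECT_NAME(ips.object_id),
       ips.page_count,
       ips.avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('stable_vc'), NULL, NULL, 'SAMPLED') AS ips;
```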
Thanks,
Pedro