March 25, 2002 at 1:12 pm
I am using a script to make a copy of an existing SQL Server 2000 table under a new name. Counting the bytes in the original table's columns, the row length comes to 6013. When I run the script in Query Analyzer to create the new table, the table is created, but I'm warned that the maximum row length is nearly double the 6013, that inserts will fail, etc. What am I missing here?
Thanks to any and all.
Dave Dufour
http://www.du4.com/dave
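The warning described here matches SQL Server 2000's 8060-byte per-row limit: CREATE TABLE succeeds, but the declared maximum row size is checked against the 8 KB data page. A minimal sketch (table and column names are hypothetical) that triggers the same warning:

```sql
-- Hypothetical repro: each nvarchar(n) column reserves up to n * 2 bytes,
-- so the declared maximum row size here is 4 + 6000 + 3000 = 9004 bytes.
-- That is past the 8060-byte per-row limit, so CREATE TABLE succeeds but
-- warns that an INSERT or UPDATE will fail if the actual row grows too long.
CREATE TABLE dbo.NotesCopy (
    NoteID   int            NOT NULL,
    Body     nvarchar(3000) NULL,   -- reserves up to 6000 bytes
    Comments nvarchar(1500) NULL    -- reserves up to 3000 bytes
)
```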
March 25, 2002 at 1:56 pm
Sounds like maybe you scripted everything as Unicode, using nchar, nvarchar, etc.?
Andy
March 25, 2002 at 2:16 pm
March 25, 2002 at 2:26 pm
Did your original table use varchars or nvarchars? Unicode takes twice as many bytes as a plain char.
Andy
March 25, 2002 at 2:42 pm
Yes, the two tables are essentially identical, but the duplicate has one additional int field (an ID). We're using nvarchars all the way through. One of these is particularly large (3000). It may be that I need to split that one off into its own table. Are you saying that an nvarchar(50), for example, actually uses 100 bytes?
Dave Dufour
http://www.du4.com/dave
March 25, 2002 at 3:38 pm
That's correct. The N types take twice the space to store the data; Unicode takes two bytes to represent one "character".
Andy
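That doubling can be checked directly in Query Analyzer with DATALENGTH (bytes stored) versus LEN (characters); a quick sketch:

```sql
-- DATALENGTH returns bytes stored; LEN returns characters.
DECLARE @a varchar(50), @n nvarchar(50)
SET @a = 'hello'    -- 1 byte per character
SET @n = N'hello'   -- 2 bytes per character (Unicode)
SELECT DATALENGTH(@a) AS varchar_bytes,   -- 5
       DATALENGTH(@n) AS nvarchar_bytes,  -- 10
       LEN(@n)        AS nvarchar_chars   -- 5
```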
March 26, 2002 at 6:32 am
Thanks. That makes a big difference. Can I replace these nvarchar fields with the text type? The app we have built will take input from Europe as well as the US. It appears to accept the European characters Ä, Ö, ß and the like, and doesn't produce the overhead. Is there a downside to this?
Dave Dufour
http://www.du4.com/dave
March 26, 2002 at 6:52 am
If you know the maximum length is 4000, then varchar should be able to support you. Try it in another table with these characters. If not, then ntext will support you, but you will not be able to query the column without the full-text search engine installed.
"Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
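The "try it in another table" suggestion might look like this (table name hypothetical); whether the characters survive depends on the database's collation and code page:

```sql
-- Round-trip some European characters through a varchar column.
-- Under a Latin-1 code page they should come back intact; under
-- other collations they may be silently mapped to '?'.
CREATE TABLE dbo.CharsetTest (val varchar(50))
INSERT INTO dbo.CharsetTest (val) VALUES ('ÄÖß')
SELECT val FROM dbo.CharsetTest   -- compare against what was inserted
DROP TABLE dbo.CharsetTest
```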
March 26, 2002 at 7:00 am
If you're supporting international apps, I'd be careful about straying from Unicode fields. If you need a text column, use ntext. If you can get by with varchar, you still want to reexamine your row length; 6000-byte rows will really kill your page I/O rates. Narrow columns and narrow tables give the best performance. It really depends on how, and how often, you will access it whether it's worth changing.
Andy
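One way to keep the main table narrow, along the lines suggested above, is to move the wide column into a one-to-one side table; a sketch with hypothetical names:

```sql
-- Main table stays narrow; the wide column lives in its own table,
-- joined one-to-one on the key only when it is actually needed.
CREATE TABLE dbo.OrderHeader (
    OrderID int          NOT NULL PRIMARY KEY,
    Status  nvarchar(20) NULL
)
CREATE TABLE dbo.OrderLongText (
    OrderID  int            NOT NULL PRIMARY KEY
        REFERENCES dbo.OrderHeader (OrderID),
    LongText nvarchar(3000) NULL
)
```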
March 26, 2002 at 8:22 am
Just to be clear: if I were to switch this 3000-character nvarchar to an ntext field, it would accommodate all the Unicode, BUT I couldn't do a text search on that field? This particular long field is one we'd never search on, so I'm thinking this may be our solution. However, it is included in several views and stored procedures. The other question is: can I just change the data type without losing anything that's already there? It appears that I've been able to do this in tests changing the nvarchar to a text field. Unless, of course, I'm missing something.
Dave Dufour
http://www.du4.com/dave
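If a direct ALTER COLUMN isn't allowed (Enterprise Manager sidesteps this by rebuilding the table behind the scenes), the conversion can be done explicitly; a sketch with hypothetical names, relying on the fact that nvarchar converts to ntext implicitly:

```sql
-- Rebuild-and-copy conversion: existing nvarchar data converts to
-- ntext implicitly, so nothing is lost in the copy.
CREATE TABLE dbo.DocsNew (
    DocID int   NOT NULL,
    Body  ntext NULL
)
INSERT INTO dbo.DocsNew (DocID, Body)
SELECT DocID, Body FROM dbo.Docs
-- After verifying the copy:
-- DROP TABLE dbo.Docs
-- EXEC sp_rename 'dbo.DocsNew', 'Docs'
```

Any views or stored procedures that reference the column would still need checking, since ntext can't be used everywhere nvarchar can (equality comparisons, ORDER BY, GROUP BY, etc.).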
March 26, 2002 at 8:27 am
You can change to text or ntext; the only time you "might" lose data is if you go from Unicode to non-Unicode. You can always use full-text search if you need to search the text column.
Andy
March 26, 2002 at 9:55 am
Many thanks for the assistance. This has helped greatly.
Dave Dufour
http://www.du4.com/dave