
Requiring Columns Over 900 Bytes to Be Unique


Had a friend call me recently to ask about this: he had an nvarchar(2000) column that needed to hold unique values. If you've ever tried this you'll know that SQL will let you create the unique index - with a warning! Everything works fine as long as no row exceeds 900 bytes in that column, but the first time you try to insert or update a value past 900 bytes - bam! Now I know many of you will be shaking your head at the idea of an index that big anyway, and I don't disagree. At the same time it was just a business problem to solve, and he was hoping for a quick way to do it without a lot of changes. In other words, having the column be unique was important, but it was far less important how it ended up that way.
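If you haven't run into the limit yourself, a quick repro makes it concrete (table, column, and index names here are invented just for illustration):

-- Hypothetical table with the problem column
CREATE TABLE dbo.BigValues
(
    BigValueID int IDENTITY(1,1) PRIMARY KEY,
    BigValue   nvarchar(2000) NOT NULL
);
GO

-- This succeeds, but SQL Server warns that the key can exceed 900 bytes
-- and that inserts or updates with such values will fail
CREATE UNIQUE INDEX IX_BigValues_BigValue ON dbo.BigValues (BigValue);
GO

-- 450 characters of nvarchar = 900 bytes, so this still works
INSERT dbo.BigValues (BigValue) VALUES (REPLICATE(N'a', 450));

-- 451 characters = 902 bytes - this one fails at run time
INSERT dbo.BigValues (BigValue) VALUES (REPLICATE(N'b', 451));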

No good solutions to this. My first thought was to hash the value and put a unique index on the hash. Doable, adds a bit of overhead, and the index would work fine. Another idea would be to create a computed column on the left 900 bytes and index that; it would work unless we really had values over 900 bytes (which is really the key question - many columns are defined far larger than the average value just in case), and even then it wouldn't break the index, it would just cause us to reject some rows as duplicates that really were not (because we'd lose everything past the first 900 bytes). A variation of the first idea would be to move the large column to a lookup table, create a key (hash, guid, or identity), and store that in the main table with a unique index applied. But the easiest - if clunky - option is to just do an exists check in insert and update triggers.
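For what it's worth, the hash idea would look something like this - again with made-up names, and assuming SQL Server 2005's HASHBYTES function. A hash collision would falsely reject a genuinely unique value, but with SHA1 that's extraordinarily unlikely:

-- Computed column holding a 20-byte SHA1 hash of the full value
ALTER TABLE dbo.BigValues
    ADD BigValueHash AS CAST(HASHBYTES('SHA1', BigValue) AS binary(20)) PERSISTED;
GO

-- The hash is well under 900 bytes, so the unique index is safe
CREATE UNIQUE INDEX IX_BigValues_BigValueHash ON dbo.BigValues (BigValueHash);
GO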

Moving the logic to a trigger was the easiest solution because it was essentially the same logic they were already using in a stored proc. They were hoping for a miracle of technology with some new feature in 2005, but even included columns won't solve this one.
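The end result was roughly the following - just a sketch with invented names, written as a single trigger covering both insert and update rather than two separate ones:

CREATE TRIGGER trg_BigValues_EnforceUnique
ON dbo.BigValues
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- The new rows are already in the table, so any other row with the
    -- same value (including a duplicate arriving in the same statement)
    -- shows up as a different BigValueID with a matching BigValue.
    IF EXISTS
    (
        SELECT 1
        FROM dbo.BigValues b
        INNER JOIN inserted i
            ON b.BigValue = i.BigValue
           AND b.BigValueID <> i.BigValueID
    )
    BEGIN
        RAISERROR('BigValue must be unique.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
GO

It's not as cheap as a real unique index - every insert and update pays for the exists check - but it enforces the rule without touching the table design or the application.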
