Clean out all bad characters from a string.

Kenneth Fisher, 2015-03-25

I have a customer who is having a problem with a load. He is getting a fair number of bad characters, including some Unicode characters. The Unicode characters in particular are keeping those values out of a varchar column. This isn't an uncommon problem, so when he came to me I said (without thinking), "Well, just clear out the bad characters." Unfortunately, after a few minutes to think about it I realized that's easier said than done. I did some research, and the best solutions I could find were here and here. In one case each string is parsed one character at a time, and in the other each string is cleaned by a WHILE loop that removes bad characters one at a time until none are left. Neither really satisfied me. Both are RBAR (row by agonizing row) and neither is particularly fast. So I wanted a set-based method.

To start with I created some test data.

-- Create a table with a bunch of rows and one column 
-- with string data.
SELECT CAST(t.name + ' ' + c.name AS nvarchar(max)) AS StringToFix
	INTO BadStringList
FROM sys.tables t
CROSS JOIN sys.all_columns c;
ALTER TABLE BadStringList ADD Id INT NOT NULL IDENTITY (1,1) 
     CONSTRAINT pk_BadStringList PRIMARY KEY;
-- Put in one random (probably bad) character into about 
-- 2 percent of the rows.  Then do it 75 times
; WITH MyCTE AS (
SELECT TOP (2) PERCENT StringToFix, STUFF(StringToFix, 
		CAST(rand((len(StringToFix) * datepart(ms,getdate()))^2) * len(StringToFix) AS Int) + 1, 1,
		NCHAR(CAST(rand((len(StringToFix) * datepart(ms,getdate()))^2) * 65025 AS Int))) AS Stuffed
FROM BadStringList
ORDER BY NEWID()
)
UPDATE MyCTE
SET StringToFix = Stuffed
GO 75
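Before cleaning anything, it's worth confirming the corruption actually took. This quick check counts the rows that now contain a character outside the approved list, using the same pattern as the clean-up code:

```sql
-- Count rows containing at least one character outside the approved list.
DECLARE @Pattern varchar(50) = '%[^a-zA-Z0-9_''{}"() *&%$#@!?/\;:,.<>]%';
SELECT COUNT(*) AS CorruptedRows
FROM BadStringList
WHERE StringToFix COLLATE Latin1_General_BIN2 LIKE @Pattern;
```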

Here is my clean-up code. Note that it doesn't actually clean the bad data out of the source; it produces a result set with clean data.

DECLARE @Pattern varchar(50) = '%[^a-zA-Z0-9_''{}"() *&%$#@!?/\;:,.<>]%';
WITH FixBadChars AS (SELECT StringToFix, StringToFix AS FixedString, 1 AS MyCounter, Id
				FROM BadStringList
				UNION ALL
				SELECT StringToFix, Stuff(FixedString, PatIndex(@Pattern, 
					FixedString COLLATE Latin1_General_BIN2), 1, '') AS FixedString, 
					MyCounter + 1, Id
				FROM FixBadChars
				WHERE FixedString COLLATE Latin1_General_BIN2 LIKE @Pattern)
SELECT StringToFix, FixedString, MyCounter, Id
FROM FixBadChars
WHERE MyCounter = 
		(SELECT MAX(MyCounter) 
		FROM FixBadChars Fixed
		WHERE Fixed.Id = FixBadChars.Id)
OPTION (MAXRECURSION 1000);

This returned ~170k cleaned rows in under 30 seconds.

So here is what it does. Let’s start with the pattern I’m using.

DECLARE @Pattern varchar(50) = '%[^a-zA-Z0-9_''{}"() *&%$#@!?/\;:,.<>]%';

A basic start is '[a-z]', which will match a single character a-z. In pattern matching, ^ is a NOT, so [^a-z] matches anything that is NOT a-z. Then we add in all the other characters: [^a-zA-Z0-9_'{}"() *&%$#@!?/\;:,.<>], which will match any character that is not an upper- or lowercase a-z, a digit 0-9, or one of the symbols listed. This is our approved list. Fair warning: I was not able to figure out how to get it to work with []s in the list. Maybe someone who is better at pattern matching than me can figure it out. Last but not least, add %s on either end and you have a pattern that will find a string containing any character that is not in the approved list.
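To see the pattern in action, here's a quick sanity check; NCHAR(955) (a Greek lambda) stands in for an arbitrary bad character:

```sql
DECLARE @Pattern varchar(50) = '%[^a-zA-Z0-9_''{}"() *&%$#@!?/\;:,.<>]%';
-- PATINDEX returns the position of the first character not in the approved list.
SELECT PATINDEX(@Pattern, 
	N'Good' + NCHAR(955) + N'String' COLLATE Latin1_General_BIN2) AS FirstBadChar;
-- Returns 5: the lambda is the first disallowed character.
```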

Next I'm using a recursive CTE to loop through the string. The anchor pulls each string in unchanged, and the recursive member removes one bad character per pass; its WHERE clause tests the string being fixed to see if there are any more bad characters, so the recursion stops once a string is clean. I also have a counter so that in the main query I can pull the entry for each string with the largest counter, that being the fixed row. I'm using a subquery to get the fixed row, joining on my primary key Id. In my case the query also works fine if you join on StringToFix (assuming it is unique). Basically you just need a unique per-row value to join on. It helps if it's indexed (30 seconds vs. 45 for my run).

WHERE MyCounter = 
		(SELECT MAX(MyCounter) 
		FROM FixBadChars Fixed
		WHERE Fixed.StringToFix = FixBadChars.StringToFix)
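Each level of the recursion removes exactly one bad character. You can see a single pass of that STUFF/PATINDEX step in isolation; the embedded NCHAR(7) (a bell character) is just an arbitrary example:

```sql
DECLARE @Pattern varchar(50) = '%[^a-zA-Z0-9_''{}"() *&%$#@!?/\;:,.<>]%';
DECLARE @s nvarchar(max) = N'Bad' + NCHAR(7) + N'String';
-- STUFF deletes one character at the position PATINDEX found.
SELECT STUFF(@s, PATINDEX(@Pattern, @s COLLATE Latin1_General_BIN2), 1, '') AS OnePass;
-- Returns 'BadString'; the CTE repeats this until the LIKE test finds nothing.
```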

Two more minor settings.

COLLATE Latin1_General_BIN2

The COLLATE is necessary because otherwise some Unicode characters get missed by the PATINDEX command. I'm using SQL_Latin1_General_CP1_CS_AS as my default collation, but forcing the binary collation should work regardless of which default collation you use.
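Here's a small demonstration of the difference. The exact result of the non-binary comparison depends on your server's collation, but under many of them a zero-width space (NCHAR(8203)) is treated as ignorable and gets missed:

```sql
DECLARE @s nvarchar(10) = N'ab' + NCHAR(8203) + N'c';  -- zero-width space at position 3
SELECT PATINDEX('%[^a-z]%', @s) AS DefaultCollation,
       PATINDEX('%[^a-z]%', @s COLLATE Latin1_General_BIN2) AS BinaryCollation;
-- The binary collation reliably reports position 3; the default may return 0.
```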

OPTION (MAXRECURSION 1000)

The recursive CTE will recurse once for each bad character in a string. So the deepest the query will recurse is the maximum number of bad characters in a single string. Realistically the default MAXRECURSION of 100 is probably sufficient in this case but I believe in better safe than sorry.
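If you genuinely can't bound the worst case, MAXRECURSION 0 removes the limit entirely; just be aware that a logic error in the recursive member would then run until the server gives up:

```sql
-- Remove the recursion cap entirely (use with care).
OPTION (MAXRECURSION 0);
```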

And that’s it. A non-RBAR way to clean a string. Jeff Moden would be so proud.
