Remove non printable characters
Posted Friday, April 2, 2010 9:05 AM


SSChampion


Group: General Forum Members
Last Login: Yesterday @ 6:07 PM
Points: 11,194, Visits: 11,109
carl.anderson-1037280 (4/2/2010)
I see. So if you want to remove the full set of 37 unprintable characters, you would have to create 37 levels of nesting with REPLACE. I guess the computer scientist in me is concerned about 37 full string scans per value when Jeff's code only does one.

REPLACE is pretty fast, especially when a binary or SQL collation is used.

However, Jeff also uses the PATINDEX function and double wildcards, so I guess it says something about the performance of PATINDEX if the nested REPLACE functions are still faster.
Seems like there should be a better way! CLR maybe?
Maybe a better question is, why is a deeply nested REPLACE so much faster than Jeff's solution?

The answer is that optimizing the replacement operation is much less important than optimizing the overall query plan. Let me explain that...

The nested REPLACE uses an in-line table-valued function, which is logically just a parameterized view. The definition of the 'function' is expanded in-line into the query plan of the calling statement before SQL Server optimization occurs. This is the reason the whole operation can be performed with a single scan and a Compute Scalar.

Scalar T-SQL functions cannot be in-lined in this way, and are invoked once per row through a relatively slow interface. These overheads are enough to completely dominate the overall cost.

A CLR scalar function uses a much faster invocation path, but the overhead of passing the values row by row to and from the hosted environment again dominates the overall cost.

Believe me (and Jeff will attest to this) when I say I am very keen on appropriate use of SQLCLR integration. Unfortunately, an in-line function will out-perform a CLR scalar function every time in this sort of task. The difference can be made small (and CLR is much faster than a T-SQL scalar function) but never faster than the fully in-lined solution.
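To make the contrast concrete, here is a minimal sketch of the same single-character cleanup written both ways (hypothetical names, not the code from this thread):

```sql
-- Scalar T-SQL UDF: invoked once per row through the slow interface
CREATE FUNCTION dbo.FN_StripChar0 (@s VARCHAR(8000))
RETURNS VARCHAR(8000)
AS
BEGIN
    RETURN REPLACE(@s, CHAR(0), '');
END;
GO

-- In-line TVF: expanded into the calling query's plan like a parameterized view
CREATE FUNCTION dbo.IF_StripChar0 (@s VARCHAR(8000))
RETURNS TABLE
AS RETURN
    SELECT cleaned = REPLACE(@s, CHAR(0), '');
GO

-- Per-row invocation versus a single in-lined Compute Scalar:
SELECT dbo.FN_StripChar0(t.col) FROM dbo.SomeTable AS t;   -- slow path
SELECT c.cleaned
FROM dbo.SomeTable AS t
CROSS APPLY dbo.IF_StripChar0(t.col) AS c;                 -- in-lined
```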

Paul




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #895702
Posted Friday, April 2, 2010 6:34 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Yesterday @ 8:58 PM
Points: 36,794, Visits: 31,253
Paul White NZ (4/2/2010)
Believe me (and Jeff will attest to this) when I say I am very keen on appropriate use of SQLCLR integration. Unfortunately, an in-line function will out-perform a CLR scalar function every time in this sort of task. The difference can be made small (and CLR is much faster than a T-SQL scalar function) but never faster than the fully in-lined solution.

Paul



Heh... attest and certify. The cool part is that you're not one of those folks who think SQLCLR is a panacea; you use it wisely.


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #896089
Posted Monday, April 5, 2010 3:58 AM
SSCrazy


Group: General Forum Members
Last Login: 2 days ago @ 8:23 AM
Points: 2,025, Visits: 2,521
The REPLACE solution can be made to run an order of magnitude faster if we use an in-line table-valued function instead of an evil scalar function:


I am not getting the exact meaning of this statement.

What do you mean by 'order of magnitude'?

Do you mean that if we use the REPLACE function inside an in-line table-valued function, the performance will be good? Am I correct?


karthik
Post #896598
Posted Monday, April 5, 2010 4:32 AM


SSChampion


Group: General Forum Members
Last Login: Yesterday @ 6:07 PM
Points: 11,194, Visits: 11,109
karthikeyan-444867 (4/5/2010)
The REPLACE solution can be made to run an order of magnitude faster if we use an in-line table-valued function instead of an evil scalar function:
I am not getting the exact meaning of this statement. What do you mean by 'order of magnitude'?

Order of magnitude = 10 times faster.

Do you mean that if we use the REPLACE function inside an in-line table-valued function, the performance will be good? Am I correct?

Yes. Test it, and examine the query plan to understand it fully.
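One way to run such a test (hypothetical table and column names; FN_CleanWithLoop stands in for a scalar WHILE-loop function like Jeff's):

```sql
SET STATISTICS TIME ON;   -- also enable the actual execution plan in SSMS

-- In-line TVF version: a single scan plus a Compute Scalar
SELECT c.cleaned
FROM dbo.SomeTable AS t
CROSS APPLY dbo.IF_CleanWithReplace(t.SomeColumn) AS c;

-- Scalar UDF version: the function is invoked once per row
SELECT dbo.FN_CleanWithLoop(t.SomeColumn)
FROM dbo.SomeTable AS t;

SET STATISTICS TIME OFF;
```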




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #896605
Posted Tuesday, April 6, 2010 2:11 PM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, May 21, 2014 9:26 AM
Points: 9, Visits: 99
Hi Paul,

Thanks for your replies and corrections. I actually put together a version of your example that nests 31 REPLACE functions and then tested it against Jeff's WHILE loop. As it turns out, they are closer performance-wise than you might think.

I'm testing on real-life medical data, so I can't share it. But I can tell you that it's varchar(1200) columns containing RTF with very few actual unprintable characters per row. I'm stripping unprintables so I can use a FOR XML query to concatenate RTF fragments into whole documents; unprintable characters are apparently not allowed in the XML.

I'm selecting the top 60,000 rows from a table of 13,000,000, and it takes about 30 seconds for the nested REPLACE function to complete, versus about 40 seconds for Jeff's WHILE loop function.

Interestingly enough, when I SELECT INTO a temp table, both solutions complete in almost exactly the same amount of time: about 26 seconds each.

By selecting into a temp table I can clean the RTF for the full table in about an hour. Is this the best we can do? Maybe I'm missing a query option or something?

Thanks again,

Carl

Here's my code for your reference:
CREATE  FUNCTION dbo.IF_CleanWithReplace
(
@SomeText VARCHAR(MAX)
)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
SELECT
cleaned =
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
@SomeText COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS,
char(0) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(1) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(2) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(3) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(4) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(5) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(6) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(7) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(8) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(9) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(10) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(11) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(12) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(13) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(14) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(15) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(16) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(17) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(18) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(19) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(20) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(21) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(22) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(23) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(24) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(25) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(26) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(27) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(28) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(29) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(128) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0));
GO
SELECT TOP (60000) clean.*
FROM dev.Y_RTF_NOTE_TEXT
CROSS APPLY dbo.IF_CleanWithReplace(NOTE_RICH_TEXT) AS clean;
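For reference, the SELECT INTO comparison looks something like this (the temp-table name is arbitrary):

```sql
SELECT TOP (60000) clean.cleaned
INTO #CleanedNotes
FROM dev.Y_RTF_NOTE_TEXT
CROSS APPLY dbo.IF_CleanWithReplace(NOTE_RICH_TEXT) AS clean;
```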

Post #897887
Posted Tuesday, April 6, 2010 2:29 PM


SSChampion


Group: General Forum Members
Last Login: Yesterday @ 6:07 PM
Points: 11,194, Visits: 11,109
Carl,

Two suggestions:

1. Define @SomeText AS VARCHAR(8000) instead of VARCHAR(MAX).
2. Use the LATIN1_GENERAL_BIN collation, and only on @SomeText, not on the replacement characters.

VARCHAR(MAX) variables require a tempdb worktable, whereas VARCHAR(8000) does not.

A binary collation is even faster than a SQL collation, and it is safe to use with your replacement characters, since you are matching by code point value.

Paul




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #897901
Posted Tuesday, April 6, 2010 3:24 PM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, May 21, 2014 9:26 AM
Points: 9, Visits: 99
Excellent suggestions Paul!

I changed the argument varchars from MAX to 1200 and that improved performance marginally, by about 5%. I switched the collation and selected the top 60,000 again, but found it still took between 30 and 35 seconds. However, when I selected INTO a temp table I saw that performance had jumped by a factor of four: I went from 26 seconds to just 6!

This means I can clean the 13 million rows in about 25 minutes now. Ideally I would like to cut this in half, but it's still a big win. I guess I have to read up on collations now!

Carl

Here's the updated code for anyone interested:
CREATE  FUNCTION dbo.IF_CleanWithReplace
(
@SomeText VARCHAR(1200)
)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
SELECT
cleaned =
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
@SomeText COLLATE LATIN1_GENERAL_BIN,
char(0),''), char(1),''), char(2),''), char(3),''), char(4),''),
char(5),''), char(6),''), char(7),''), char(8),''), char(9),''),
char(10),''), char(11),''), char(12),''), char(13),''), char(14),''),
char(15),''), char(16),''), char(17),''), char(18),''), char(19),''),
char(20),''), char(21),''), char(22),''), char(23),''), char(24),''),
char(25),''), char(26),''), char(27),''), char(28),''), char(29),''),
char(128),'');
GO

PS> I replaced the SPACE(0) with '' - it didn't seem to change performance. What's the idea behind SPACE(0)??
Post #897942
Posted Tuesday, April 6, 2010 3:35 PM


SSChampion


Group: General Forum Members
Last Login: Yesterday @ 6:07 PM
Points: 11,194, Visits: 11,109
Thank you, Carl.

As far as the actual requirement is concerned, concatenating RTF fragments (I'm not even going to ask why they are like that to start with) might be best done outside the database. RTF processing solutions I have seen before (admittedly converting from RTF to another format) used SSIS to bulk-read the data from SQL Server, and a C# or VB.NET component called from a Script Task to do the hard work. There is some kind of RTF support in the standard library, IIRC, and plenty of code samples out there.

One notable advantage of this approach is that an SSIS package could take advantage of parallelism, and involve more than one machine in the effort.

Just some ideas.

Paul




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #897954
Posted Tuesday, April 6, 2010 3:41 PM


SSChampion


Group: General Forum Members
Last Login: Yesterday @ 6:07 PM
Points: 11,194, Visits: 11,109
carl.anderson-1037280 (4/6/2010)
I guess I have to read up on collations now!

Worth doing, but the reason it matters here is that the sorting and comparison rules are very complex for Windows collations, much less so for SQL collations, and pretty simplistic for binary collations. See https://connect.microsoft.com/SQLServer/feedback/details/512459/replace-function-extremely-slow-with-non-binary-windows-collation

PS> I replaced the SPACE(0) with '' - it didn't seem to change performance. What's the idea behind SPACE(0)??

I just find SPACE(0) more explicit and easier to read than ''.
The difference between '' and ' ' is not always easy to spot in scripts, whereas SPACE(0) is clearly not the same as SPACE(1).




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #897956
Posted Monday, December 6, 2010 10:52 AM
Forum Newbie


Group: General Forum Members
Last Login: Friday, February 4, 2011 1:39 PM
Points: 3, Visits: 12
Here's another approach:

http://iso30-sql.blogspot.com/2010/10/remove-non-printable-unicode-characters.html
Post #1030735