
Using a UDF as a SARG. Make a hash of it!


I am somewhat uncomfortable blogging this, as I feel that it falls into the "polish a turd" category.  Having said that, it is a particularly ugly turd and it does polish up rather well.  Though, as the saying goes, it is still a turd.

The evils and inadequacies of UDFs (user-defined functions) in SQL Server are well documented, but there is another scenario involving UDFs that will cause a lot of pain if not carefully controlled.

A performance question on SQL Server Central about joining on UDFs got me thinking about how SQL Server joins sets of data when the SARGs are built from UDFs.  This led me to two blog posts by Conor Cunningham (1, 2), and I would urge you to read those.

In short, there are three methods of joining data: loop, hash and merge.  I'm going to ignore merge for the rest of this post and concentrate on loop and hash.

Consider the following SQL code:

Code Snippet
Drop function Func1
go
create function Func1(@id integer)
     returns varchar(510)
as
begin
   return cast(@id as varchar(510))--Func1
end
go
Drop function Func2
go
create function Func2(@id integer)
     returns varchar(510)
as
begin
   return cast(@id as varchar(510))--Func2
end
go
drop table #x1
go
Create table #x1
(
  Id integer
)
go
insert into #x1(Id)
select top(10) ROW_NUMBER() over (order by (select null))
  from sys.columns
go
select * from #x1 a join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)
option(recompile)

How much thought would be given to the mechanics of how the data is joined together?  My guess is not a lot, and rightly so.  In most scenarios we trust the optimizer to do its job, and generally speaking it does it well.  The major problem with UDFs is that the optimizer is unaware of how much work is involved inside one.  All it knows is that the function will return a single value and its data type; the effort of calculating that value, be it string parsing, mathematical equations or more SQL code, is not considered.  This can have dramatic consequences if a loop join is chosen to process the query.


In the above select statement, how many executions of the functions “Func1” and “Func2” would you expect?

Here’s the execution plan.

[Execution plan: nested loops join between the two scans of #x1]

No real clue there, but if you run Profiler with a stored procedure statement trace you will find that both functions were executed 100 times!  Because we have 10 rows in our source table (#x1), it's easy to deduce why: for each possible combination of rows, both functions have been executed (10 * 10 = 100).  You can do the math for yourself if we had 1,000 rows on each side.
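If you would rather not set up a Profiler trace, here is a rough alternative sketch, assuming SQL Server 2016 or later where the sys.dm_exec_function_stats DMV is available and the functions' plans are still in cache:

-- Count how many times each scalar UDF has run since its plan was cached
select object_name(object_id) as function_name,
       execution_count
  from sys.dm_exec_function_stats
 where object_name(object_id) in ('Func1', 'Func2')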

So what's the solution?  How can we lower the number of calls to our functions?  Adding 'with schemabinding' causes a slightly different plan, but doesn't lower the overall number of calls to the functions.
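For clarity, the schemabinding option goes on the function definition itself; a sketch along these lines (Func1 shown, Func2 is the same):

Drop function Func1
go
create function Func1(@id integer)
     returns varchar(510)
with schemabinding   -- only change from the original definition
as
begin
   return cast(@id as varchar(510))--Func1
end
go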

What we must do is force a hash join.

select * from #x1 a inner hash join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)
option(recompile)

That results in the below execution plan.

[Execution plan: hash match join between the two scans of #x1]

Now, if we look at our profiler trace we will see each function has only been called 10 times. 
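If you'd rather not embed the hint in the join syntax itself, a query-level hint should give the same effect; a sketch, bearing in mind that OPTION (HASH JOIN) applies to every join in the statement:

select * from #x1 a join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)
option(hash join, recompile)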

Turd Polished.
