I am somewhat uncomfortable blogging this, as I feel it falls into the “polish a turd” category. Having said that, it is a particularly ugly turd and it does polish up rather well. Though, as the saying goes, it is still a turd.
The evils and inadequacies of UDFs (user-defined functions) in SQL Server are well documented, but there is another UDF scenario that will cause a lot of pain if not carefully controlled.
A performance question on SQL Server Central about joining on UDFs got me thinking about how SQL Server joins sets of data when the SARGs involve UDFs. This led me to two blog posts by Conor Cunningham (1, 2), and I would urge you to read those.
In short, there are three methods of joining data: loop, hash and merge. I’m going to ignore merge for the rest of this post and concentrate on loop and hash.
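If you want to experiment with how each strategy behaves, all three can be requested explicitly with a join hint. A minimal sketch, assuming two placeholder tables t1 and t2 that each have a comparable id column:

```sql
-- Force each physical join type in turn and compare the plans
select * from t1 a inner loop  join t2 b on a.id = b.id  -- nested loops
select * from t1 a inner hash  join t2 b on a.id = b.id  -- hash match
select * from t1 a inner merge join t2 b on a.id = b.id  -- merge (wants sorted inputs)
```

Bear in mind a hint forces the optimizer’s hand, so this is for investigation rather than something to scatter through production code.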
Consider the following SQL code:
drop function dbo.Func1
go
create function dbo.Func1(@id integer)
returns varchar(510)
as
begin
    return cast(@id as varchar(510)) -- Func1
end
go
drop function dbo.Func2
go
create function dbo.Func2(@id integer)
returns varchar(510)
as
begin
    return cast(@id as varchar(510)) -- Func2
end
go
drop table #x1
go
create table #x1
(
    Id integer
)
insert into #x1(Id)
select top(10) ROW_NUMBER() over (order by (select null))
from sys.columns

select * from #x1 a join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)
How much thought would be given to the mechanics of how the data is joined together? My guess is not a lot, and rightly so: in most scenarios we trust the optimizer to do its job, and generally speaking it does it well. The major problem with UDFs is that the optimizer has no idea how much work is involved inside one. All it knows is that the function returns one value and that value’s datatype. The cost of calculating it, be it string parsing, mathematical work or more SQL code, is not considered. This can have dramatic consequences if a loop join is chosen to process the query.
In the above select statement, how many executions of the functions “Func1” and “Func2” would you expect?
Here’s the execution plan.
No real clue there, but if you run Profiler with a stored-procedure statement trace you will find that both functions were executed 100 times! Since we have 10 rows in our source table (#x1), it is easy to deduce why: the functions are executed for every candidate row combination (10 * 10 = 100). Scale that up to 1,000 rows on each side and you are looking at 1,000,000 executions of each function.
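If you would rather not run a Profiler trace, newer versions of SQL Server (2016 SP2 onwards, if memory serves) expose scalar UDF execution counts through a DMV. A sketch, with the caveat that counts only cover plans still in cache:

```sql
-- Execution counts for our two functions, without a trace
select object_name(object_id) as function_name,
       execution_count
from sys.dm_exec_function_stats
where object_name(object_id) in ('Func1', 'Func2')
```

Run this after the join above and the execution_count column should tell the same story as the trace.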
So what’s the solution? How can we reduce the number of calls to our functions? Adding ‘with schemabinding’ causes a slightly different plan, but doesn’t lower the overall number of calls to the functions.
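For reference, the schemabound variant looks like this (a sketch of Func1 only; Func2 is identical, and schema-qualified object names are required once schemabinding is on):

```sql
create function dbo.Func1(@id integer)
returns varchar(510)
with schemabinding
as
begin
    return cast(@id as varchar(510)) -- Func1, schemabound
end
```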
What we must do is force a hash join.
select * from #x1 a inner hash join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)
That results in the execution plan below.
Now, if we look at our Profiler trace we will see that each function has been called only 10 times: the hash join computes the join key for each input row exactly once, building on one side and probing on the other, rather than re-evaluating the functions for every row combination.