platform: MSSQL 2008R2
I'm facing a performance problem with the MERGE statement described below:
set transaction isolation level read uncommitted

insert into dbo.history --table containing history of data changes (20M rows; quite a few indexes defined upon it)
    (pool, ticket, oid, old_value, new_value, [userId], eCode, TCode)
select [pool], ticket, 1360, isnull(cast(old_value as varchar), ''), cast(new_Value as varchar), 1, 'Merge', 'test' --I guess the cast may also be a problem
from (
    merge dbo.table2 as target_table
    using (select vm.ticket, vm.num
           from dbo.vmSource vm --hypothetical name; the actual source table/view (aliased vm) got cut from the post
           inner join dbo.table3 T3 on T3.ticket = vm.ticket and T3.Form = 11 and T3.isDeleted = 0
           where vm.Inserted = 1) as source_table (ticket, num) --the source select returns 27K records and performs super fast when run separately
    on (target_table.ticket = source_table.ticket) --the destination table has a unique index on column ticket
    when matched then update set OValue = num
    when not matched then insert ([pool], ticket, OValue) values (23, ticket, num)
    output inserted.[pool], inserted.ticket, deleted.OValue, inserted.OValue --old and new OValue, aliased below as old_value/new_Value
) as spr([pool], ticket, old_value, new_Value);
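For what it's worth, the same logic can also be written without composable DML, with the history insert pulled out into a separate step via OUTPUT ... INTO. This is just a sketch reusing the names above; @changes is a hypothetical table variable and its column types are guesses, since the real column types weren't in the post:

```sql
declare @changes table ([pool] int, ticket int, old_value sql_variant, new_Value sql_variant);

merge dbo.table2 as target_table
using (select vm.ticket, vm.num
       from dbo.vmSource vm --hypothetical name, as above
       inner join dbo.table3 T3 on T3.ticket = vm.ticket and T3.Form = 11 and T3.isDeleted = 0
       where vm.Inserted = 1) as source_table (ticket, num)
on (target_table.ticket = source_table.ticket)
when matched then update set OValue = num
when not matched then insert ([pool], ticket, OValue) values (23, ticket, num)
output inserted.[pool], inserted.ticket, deleted.OValue, inserted.OValue
into @changes ([pool], ticket, old_value, new_Value);

insert into dbo.history (pool, ticket, oid, old_value, new_value, [userId], eCode, TCode)
select [pool], ticket, 1360, isnull(cast(old_value as varchar), ''), cast(new_Value as varchar), 1, 'Merge', 'test'
from @changes;
```

This keeps the MERGE itself small and lets the wide insert into the heavily indexed history table run as its own statement.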
The numbers: 27K rows are either inserted or updated in the target table, plus 27K rows inserted into the history table.
The operation takes almost 5 minutes to complete. I've checked the indexes and statistics, and they all seem to be in shape; no missing indexes can be found.
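For reference, this is roughly how I checked the statistics and missing indexes, using standard 2008 R2 catalog views and DMVs (table names as above):

```sql
-- When were statistics on the two tables last updated?
select object_name(s.object_id) as table_name, s.name as stats_name,
       stats_date(s.object_id, s.stats_id) as last_updated
from sys.stats s
where s.object_id in (object_id('dbo.table2'), object_id('dbo.history'));

-- Missing-index suggestions recorded by the optimizer since the last restart.
select d.statement, d.equality_columns, d.inequality_columns, d.included_columns
from sys.dm_db_missing_index_details d;
```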
A hardware problem? I don't think so; it's a tough and resilient piece of equipment.
Anyhow, has anyone got an idea why the statement performs so poorly? I know it's hard to give advice with so little to go on, but I can't stop thinking I've missed something general about the rules of the MERGE statement. I've also read the MS article with recommendations for the MERGE statement.
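As I understand them, the recommendations in that article boil down to a pattern like this (a sketch with the same hypothetical source as above): stage the 27K source rows first, give them a unique index on the join key, and keep the ON clause down to that key, with all filtering done in the source query:

```sql
select vm.ticket, vm.num
into #src
from dbo.vmSource vm --hypothetical name, as above
inner join dbo.table3 T3 on T3.ticket = vm.ticket and T3.Form = 11 and T3.isDeleted = 0
where vm.Inserted = 1;

create unique clustered index ix_src_ticket on #src (ticket);

merge dbo.table2 as target_table
using #src as source_table
on (target_table.ticket = source_table.ticket) --join key only; no additional filters here
when matched then update set OValue = source_table.num
when not matched then insert ([pool], ticket, OValue) values (23, source_table.ticket, source_table.num);
```

If that still takes minutes, the actual execution plan of the MERGE would be the next thing to look at.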
Thanks in advance