April 5, 2011 at 12:53 pm
We implemented an insert/update AFTER trigger on a heavily updated table (Inventory), and something strange happened.
A heavily called stored procedure that reads from the Inventory table suddenly took 30 seconds to complete instead of 1 second. Running the same stored procedure in SSMS, using the exact call captured from Profiler, took only 1 second. Rerunning it with the same parameters from the browser (.NET middle tier) again took 30 seconds. Recompiling the stored procedure didn't help.
When I disabled the trigger, the problem went away: the stored procedure completed in 1 second both in SSMS and when called from the browser. Re-enabling the trigger somehow did not reproduce the problem.
We upgraded to SP2 for SQL Server 2008 just last week. Has anyone experienced a similar problem?
I understand that deploying a trigger takes a schema modification lock on the table. Based on what we saw, is it safe to assume the schema change would have invalidated the cached plans, forcing every stored procedure that touches the table to recompile?
Secondly, can you think of a reason why the same stored procedure would perform so differently when called from the browser versus from SSMS? I only know of one: different ANSI settings between the two connections, which is what I am trying to verify.
Does anyone know how to find the ANSI settings for a connection? I've forgotten, and I cannot find them in Profiler. Any help is much appreciated, thanks so much.
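In case it helps anyone searching later, here are two standard ways to check SET options; the DMV route works for any session, not just your own (column values of 1 mean ON). These are documented DMVs/commands, but verify column availability on your build:

```sql
-- Current session only:
DBCC USEROPTIONS;

-- All user sessions (1 = ON for each option):
SELECT session_id, arithabort, ansi_nulls, ansi_padding,
       ansi_warnings, quoted_identifier, concat_null_yields_null
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
```

In Profiler, the ExistingConnection event also lists the session's SET options in its text column.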
April 5, 2011 at 1:57 pm
Hi! I think I got to the bottom of this problem.
I've confirmed that deploying a trigger invalidates the cached plans of stored procedures that touch the underlying table, so they recompile on their next execution. It was then a simple case of SQL Server generating a bad query plan on that recompile. And the difference between running from the browser and from SSMS was due to different ANSI settings, specifically ARITHABORT.
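For anyone who hits the same thing: SSMS sets ARITHABORT ON by default, while .NET's SqlClient leaves it OFF, and since the SET options are part of the plan cache key, the two connections get separate cache entries and can end up with different plans. A sketch of how to see the separate entries (the procedure name is a placeholder for your own):

```sql
-- Each distinct set_options value is a separate cached plan
-- for the same procedure (dbo.usp_GetInventory is a placeholder):
SELECT pa.attribute, pa.value, cp.usecounts
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE pa.attribute = 'set_options'
  AND st.objectid = OBJECT_ID('dbo.usp_GetInventory');
```

To reproduce the slow behavior in SSMS, run SET ARITHABORT OFF in the query window first so you hit the same cache entry as the application.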
Now we just need to be more careful with our deployments, but that is a process issue and a different topic altogether. Thanks.