SQLServerCentral is supported by Red Gate Software Ltd.

Performance Issue: Execution Plan different (involves cursors)
Author
Message
Posted Thursday, December 27, 2012 9:36 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Thursday, August 7, 2014 7:15 AM
Points: 125, Visits: 408
On our production server (SQL 2008 R2, build 10.50.4000) our ERP system runs this query (nothing we can do about the cursors):

declare @p1 int
set @p1=NULL
declare @p3 int
set @p3=16
declare @p4 int
set @p4=1
declare @p5 int
set @p5=1
exec sp_cursoropen @p1 output,N'SELECT *,DATALENGTH("Picture") FROM "dbname"."dbo"."Item_table"
WITH (READUNCOMMITTED, INDEX("Item_table$0"))
WHERE "No_">=@P1 ORDER BY "No_" ', @p3 output,@p4 output,@p5 output,N'@P1 varchar(20)','AAA111'
select @p1, @p3, @p4, @p5

Our ERP system coding is the same between test and live (I'm told) and we cannot change it. The index hint refers to the clustered key.
On the production server this takes 1 minute and 3.7 million reads, but on the test server it takes 6 milliseconds and 4 reads. The data is very close to being the same.

The execution plans are almost the same (although the fast one has higher CPU and I/O costs).
The quick one is this
Clustered Index Insert (tempdb primary key) 93% <- Compute Scalar <- Compute Scalar <- Clustered Index Seek (7%)

The slow one is
Clustered Index Insert (tempdb primary key) 93% <- Sequence Project (Compute Scalar) <- Segment <- Compute Scalar <- Clustered Index Seek (7%)
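For context, a Segment followed by a Sequence Project is the operator pair the optimizer uses to compute a ranking function such as ROW_NUMBER. You can reproduce that exact plan shape with the sketch below (table and column names taken from the post; the interpretation that the slow plan is numbering every qualifying row while populating the cursor worktable is my reading, not something the plan states outright):

```sql
-- Produces a Segment + Sequence Project pair in the estimated plan,
-- the same operators that appear in the slow cursor-population plan
SELECT *, ROW_NUMBER() OVER (ORDER BY "No_") AS rn
FROM dbo.Item_table
WHERE "No_" >= 'AAA111';
```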

If I run the query directly within SSMS I have to change @p3 from 16 to 28688, as 16 gives me the error:
Msg 137, Level 15, State 2, Line 2
Must declare the scalar variable "@P1".
Msg 16945, Level 16, State 2, Procedure sp_cursoropen, Line 1
The cursor was not declared.

When I change @p3 so it runs, it runs perfectly, returning the 1 record as indicated by @p5.

Does anyone have an idea what the difference means and why it is so much slower? It looks like it is trying to capture the entire recordset into the cursor rather than just the one record.
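For what it's worth, @p3 (the scrollopt parameter) is a bitmask. The bit values below are my reading of the documented sp_cursoropen scroll options, so treat them as an assumption to verify: 16 (0x0010) is FAST_FORWARD alone, while 28688 (0x7010) adds PARAMETERIZED_STMT (0x1000), AUTO_FETCH (0x2000), and AUTO_CLOSE (0x4000). Without the PARAMETERIZED_STMT bit the parameter list is not applied to the statement, which would explain the "Must declare the scalar variable @P1" error when running it by hand with 16:

```sql
-- scrollopt bit values (from the TDS cursor documentation; verify for your build)
-- 0x0010 = FAST_FORWARD
-- 0x1000 = PARAMETERIZED_STMT  (apply the declared parameter list to the statement)
-- 0x2000 = AUTO_FETCH
-- 0x4000 = AUTO_CLOSE
declare @p1 int, @p3 int, @p4 int, @p5 int
set @p3 = 0x0010 | 0x1000 | 0x2000 | 0x4000   -- = 28688, what the driver negotiates
exec sp_cursoropen @p1 output,
     N'SELECT *,DATALENGTH("Picture") FROM "dbname"."dbo"."Item_table"
       WITH (READUNCOMMITTED, INDEX("Item_table$0"))
       WHERE "No_">=@P1 ORDER BY "No_"',
     @p3 output, @p4 output, @p5 output,
     N'@P1 varchar(20)', 'AAA111'
```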
Post #1400680
Posted Thursday, December 27, 2012 4:16 PM
SSC-Addicted


Group: General Forum Members
Last Login: Sunday, August 31, 2014 1:00 AM
Points: 464, Visits: 1,036
sotn (12/27/2012)
[original post quoted in full; snipped]



If, as you say, everything is the same — execution plans, data, and server configuration — then it should perform the same, provided user activity and resource utilization are also the same.






--SQLFRNDZ
Post #1400752
Posted Friday, December 28, 2012 2:23 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Thursday, August 7, 2014 7:15 AM
Points: 125, Visits: 408
If we rebuild the indexes on the table, then all is well again, but we are having to do this every day to 'handle' the problem.
We do have overnight tasks that modify certain fields (not key fields) in the table.
Next time, I will check the statistics, but since the correct index is used/forced (the primary key), I'm not sure where else to look.
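When it next goes slow, checking how stale the statistics on that table are is quick. A sketch, with the object and index names assumed from the post:

```sql
-- When each statistics object on the table was last updated
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.Item_table');

-- Histogram detail for the clustered index named in the hint
DBCC SHOW_STATISTICS (N'dbo.Item_table', N'Item_table$0');
```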

Post #1400840
Posted Friday, December 28, 2012 5:36 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Today @ 8:39 AM
Points: 4,360, Visits: 6,198
Rebuilding an index gets you a 100% (full scan) refresh of its statistics. I would be willing to bet (without additional information) that this is a stale-statistics issue: the optimizer doesn't "know" about a particular parameter value because stats are out of date, so it "guesses" and gets a bad plan (a scan, from the sound of it).

Set up a job to refresh stats nightly on that one table and see if that helps. Ola Hallengren's maintenance suite (ola.hallengren.com) is an excellent choice for this: a flexible, well-documented, and free script suite for handling all your maintenance needs.

BTW, I have some clients that run stats updates as frequently as every hour on some critical tables. The auto-update threshold of 20% modified rows just doesn't cut it in many, many cases! Statistics are the core underpinnings of the optimization process; if they aren't valid for the current data, nothing good will happen with your queries.
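A minimal version of that nightly job, just for the one table (FULLSCAN matches the sampling the daily index rebuild was providing; table name assumed from the post):

```sql
-- Refresh every statistics object on the hot table with a full scan.
-- Schedule this via a SQL Agent job to run after the overnight update tasks.
UPDATE STATISTICS dbo.Item_table WITH FULLSCAN;
```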


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1400881