
Unexplained Performance Observation
Posted Thursday, August 22, 2013 2:16 PM


Valued Member


Group: General Forum Members
Last Login: Thursday, August 21, 2014 3:12 PM
Points: 60, Visits: 564
This is occurring on SQL 2012 but I do not think it is specific to 2012. I have also observed this on our 2005 servers prior to our upgrade to 2012 in June.

I have a highly active session database that manages application sessions.
I know it is not a good practice to manage application sessions in SQL Server, but for now let's ignore that.

Basically there is one extremely active table; here is the definition:

CREATE TABLE [dbo].[Session](
[SessionID] [varchar](100) NOT NULL,
[CreateDate] [datetime] NOT NULL,
[ExpireDate] [datetime] NOT NULL,
[LockDate] [datetime] NULL,
[LockID] [int] NOT NULL,
[Timeout] [int] NULL,
[Locked] [tinyint] NULL,
[SessionItems] [varchar](max) NULL,
[Flags] [int] NULL,
[MachineName] [varchar](300) NULL,
PRIMARY KEY CLUSTERED
(
[SessionID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

GO

There is one non-clustered index on ExpireDate.
I have two procedures that perform massive amounts of updates to this table (each over 6 million times per day).
One update simply locks a session row before returning information back to the caller (UpdateA):
UPDATE Session 
SET Locked = 1,
LockDate = @Now
WHERE SessionID = @SessionID
AND Locked = 0

The other physically updates the session info, etc. (UpdateB):
UPDATE Session 
SET ExpireDate = DATEADD(mi, @TimeoutMinutes, @Now),
SessionItems = @SessionItems,
Locked = 0,
MachineName = @MachineName
WHERE SessionID = @SessionID
AND LockID = @LockID

Both updates are executed about the same number of times per day.
Judging by the nature of the updates, I would predict with relative confidence that UpdateB would be the more expensive of the two, given that it updates a varchar(max), a varchar(300), and a column covered by the non-clustered index.
UpdateA only updates two fields, which should result in an in-place update.

We have Confio Ignite installed on our server and we have been observing the exact opposite.
The execution plans do not look bad, yet we are seeing a DRASTIC difference in WRITELOG waits, with UpdateA much worse than UpdateB.
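
(For reference, a minimal sketch of cross-checking the WRITELOG picture outside of Confio Ignite, using the standard wait-stats DMV; nothing here comes from the monitoring tool itself:)

-- Cumulative WRITELOG waits since the last service restart or wait-stats clear.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'WRITELOG';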

Can anyone provide a logical explanation for this?
Post #1487512
Posted Thursday, August 22, 2013 10:34 PM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Yesterday @ 2:23 PM
Points: 4,350, Visits: 6,160
1) Sorry, but I cannot ignore having session state stored in SQL Server.

2) You have a varchar(100) as your clustered index. Does it look like a GUID by any chance? If not, does it have new values throughout the range of characters? If so, you are likely getting page splits all over the place during inserts. You are also carrying that fat field as the pointer on the NC index too, bloating it.

3) You are very likely NOT getting in-place updates on your first update. Both fields are NULLABLE and as such when you give them a value for that update they become LARGER. Page split potential again. And for that you are moving about 4K of data to a new page.

4) All of the above is exacerbated by having a defaulted fill factor. Bad news there.

5) Have you done any analysis to determine if the second update actually increases field sizes much if at all? Review log file records to see what is really getting written there. ApexSQL has some nice capabilities for that, and there is the semi-documented fn_dblog command you can use as well.

6) I would analyze page splits and fragmentation and adjust fill factor(s) as needed.

7) Consider removing all NULLs and using appropriate DEFAULTs if you can.


Above all else I would move my session storage to an appropriate system.
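
(A minimal sketch of the checks points 5 and 6 describe, run in the session database; fn_dblog is undocumented and its output varies between versions, so treat this purely as illustrative:)

-- Point 5: what is actually being logged against dbo.Session in the active log.
SELECT Operation,
       Context,
       COUNT(*)                                  AS log_records,
       SUM(CAST([Log Record Length] AS bigint))  AS total_log_bytes
FROM fn_dblog(NULL, NULL)
WHERE AllocUnitName LIKE 'dbo.Session%'
GROUP BY Operation, Context
ORDER BY total_log_bytes DESC;

-- Point 6: fragmentation and page fullness (SAMPLED mode populates page density).
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Session'), NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id;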



Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1487632
Posted Friday, August 23, 2013 3:41 PM


SSC Eights!


Group: General Forum Members
Last Login: Thursday, August 21, 2014 3:04 PM
Points: 801, Visits: 715
TheSQLGuru (8/22/2013)
3) You are very likely NOT getting in-place updates on your first update. Both fields are NULLABLE and as such when you give them a value for that update they become LARGER.


That would be true if the table were compressed, but not otherwise. Fixed-length data types like tinyint and datetime take up the same amount of space whether they are NULL or not.
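
(A quick way to check whether that caveat even applies here, as a sketch:)

-- data_compression_desc = NONE means the fixed-length columns really are fixed size on disk.
SELECT p.index_id, p.partition_number, p.data_compression_desc
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.Session');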


Erland Sommarskog, SQL Server MVP, www.sommarskog.se
Post #1488041
Posted Monday, August 26, 2013 8:26 AM


Valued Member


Group: General Forum Members
Last Login: Thursday, August 21, 2014 3:12 PM
Points: 60, Visits: 564
TheSQLGuru (8/22/2013)
1) Sorry, but I cannot ignore having session state stored in SQL Server.

2) You have a varchar(100) as your clustered index. Does it look like a GUID by any chance? If not, does it have new values throughout the range of characters? If so, you are likely getting page splits all over the place during inserts. You are also carrying that fat field as the pointer on the NC index too, bloating it.

3) You are very likely NOT getting in-place updates on your first update. Both fields are NULLABLE and as such when you give them a value for that update they become LARGER. Page split potential again. And for that you are moving about 4K of data to a new page.

4) All of the above is exacerbated by having a defaulted fill factor. Bad news there.

5) Have you done any analysis to determine if the second update actually increases field sizes much if at all? Review log file records to see what is really getting written there. ApexSQL has some nice capabilities for that, and there is the semi-documented fn_dblog command you can use as well.

6) I would analyze page splits and fragmentation and adjust fill factor(s) as needed.

7) Consider removing all NULLs and using appropriate DEFAULTs if you can.


Above all else I would move my session storage to an appropriate system.




1 - yes...unfortunately that's the way it is for now. On the brighter side, there is a project in motion to move it.

2 - yes it is a GUID and yes we are definitely getting page splits. This table is extremely unhealthy; in its mature state it basically works out to a little better than one row per page. Yes, you are correct, inserts are page splitting as I would expect. But my question deals with the updates...why would UpdateA perform worse than UpdateB?

3 - Yes agreed, but these rows will be updated thousands of times before either timing out or being logged out. The null values should only be present until the first update. Every update after that should be in-place. Unless I am misunderstanding something.

4 - I'm not sure there is a good fill factor for a table of this nature. There is definitely session bloat present: 7,400 bytes average per session with a max of 50,000 bytes.

5 - yes, SessionItems is a volatile column that will fluctuate in size, so my assumption is that the second update is truly an EXPENSIVE update.

6 - What would you suggest for fill factor on a table like this? 100%?

7 - This would be a good start.
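
(For what it's worth, the figures quoted in point 4 can be reproduced with something like the sketch below; the WITH (NOLOCK) hint is only there to avoid blocking this very hot table and accepts dirty reads:)

SELECT COUNT(*)                       AS session_rows,
       AVG(DATALENGTH(SessionItems))  AS avg_sessionitems_bytes,
       MAX(DATALENGTH(SessionItems))  AS max_sessionitems_bytes
FROM dbo.Session WITH (NOLOCK);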

Thank You for the response


Post #1488398
Posted Monday, August 26, 2013 8:59 AM


SSC Eights!


Group: General Forum Members
Last Login: Thursday, August 21, 2014 3:04 PM
Points: 801, Visits: 715
Eric1/2aB (8/26/2013)
6 - What would you suggest for fill factor on a table like this? 100%?

For indexes on GUIDs, one idea is to have a low fill factor, say 50%. New GUIDs will fill in the holes and reduce the amount of page splits. As the pages start to get full, page splits will occur again, so you should rebuild in due time.

It will mean a waste of empty space in the buffer cache, though.
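
(A minimal sketch of applying that, with 50 used purely as an example value; ONLINE = ON requires Enterprise Edition and can be dropped otherwise:)

ALTER INDEX ALL ON dbo.Session
REBUILD WITH (FILLFACTOR = 50, ONLINE = ON);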


Erland Sommarskog, SQL Server MVP, www.sommarskog.se
Post #1488411
Posted Monday, August 26, 2013 10:52 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Yesterday @ 2:23 PM
Points: 4,350, Visits: 6,160
Erland Sommarskog (8/26/2013)
Eric1/2aB (8/26/2013)
6 - What would you suggest for fill factor on a table like this? 100%?

For indexes on GUIDs, one idea is to have a low fill factor, say 50%. New GUIDs will fill in the holes and reduce the amount of page splits. As the pages start to get full, page splits will occur again, so you should rebuild in due time.

It will mean a waste of empty space in the buffer cache, though.


I concur with Erland. I usually use somewhere between 50 and 70% fill factor for indexes at clients that lead off with a non-sequential GUID, depending on the index defrag interval and how fragmented things get between intervals. I like to see you just hitting 5 or maybe 10% fragmentation when you hit your index maintenance run. That seems to give a balance between having pages start out (and spend too much time) too empty and splitting pages/extents all over the place (with the corresponding tlog activity and other negative fragmentation effects).

Speaking of sequential GUIDs, some systems can be configured to create them on the generating server. I HIGHLY recommend seeing if your systems can do that. HUGE win on many fronts if you can avoid full-range GUID values. Having 3 or 10 places where ranges split is way better than the alternative I think.
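
(Kevin is talking about generating the value on the application tier. Purely as a database-side illustration of the same idea, and only hypothetical here since it assumes the key could be changed from varchar(100) to uniqueidentifier:)

-- NEWSEQUENTIALID() is only valid as a column DEFAULT and yields roughly ever-increasing GUIDs.
CREATE TABLE dbo.Session_Sequential
(
    SessionID uniqueidentifier NOT NULL
        CONSTRAINT DF_Session_Sequential_SessionID DEFAULT NEWSEQUENTIALID(),
    CONSTRAINT PK_Session_Sequential PRIMARY KEY CLUSTERED (SessionID)
);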


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1488454
Posted Monday, August 26, 2013 11:46 AM


Valued Member


Group: General Forum Members
Last Login: Thursday, August 21, 2014 3:12 PM
Points: 60, Visits: 564
Erland and Kevin...this is great information.
I will definitely look into the fill factor.
Sequential GUIDs definitely would make sense.
I am in the process of pushing Session off of SQL Server, so hopefully I do not have to resort to a full change of their (the application developers') session application.
Thank you guys for the input
Post #1488473
Posted Tuesday, August 27, 2013 1:23 AM


SSC-Dedicated


Group: General Forum Members
Last Login: Yesterday @ 4:03 PM
Points: 36,983, Visits: 31,508
BrerSQL (8/26/2013)

Sequential GUIDs definitely would make sense.

I'm not so sure that's true. If the computer is rebooted, the start of a new sequence might not be > the largest guid already in the table. I think you could end up with massive page splits anyway.


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1488584
Posted Tuesday, August 27, 2013 7:38 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Yesterday @ 2:23 PM
Points: 4,350, Visits: 6,160
Jeff Moden (8/27/2013)
BrerSQL (8/26/2013)

Sequential GUIDs definitely would make sense.

I'm not so sure that's true. If the computer is rebooted, the start of a new sequence might not be > the largest guid already in the table. I think you could end up with massive page splits anyway.


Yes, but still at just a single point in the tree for the newly rebooted server. Still much better than the alternative in my experience.


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1488771
Posted Tuesday, August 27, 2013 10:10 AM


Valued Member


Group: General Forum Members
Last Login: Thursday, August 21, 2014 3:12 PM
Points: 60, Visits: 564
Jeff Moden (8/27/2013)
BrerSQL (8/26/2013)

Sequential GUIDs definitely would make sense.

I'm not so sure that's true. If the computer is re-booted, the start of a new sequence might not be > the largest guid already in the table. I think you could end up with massive page splits anyway.


Keep in mind this is an Application Session DB.
If the server is re-booted and we experience a planned or unplanned outage as a result, then all rows in this table are useless, as this data maintains current application session state. We could truncate the table on startup to offset this issue.
This particular server should rarely go down.

Actually on second thought, FILL FACTOR may not work very well for a table like this unless we perform frequent INDEX REBUILDS.

Note: We have found a VLF issue for this database which could be causing our WRITELOG latency issue. The VLF issue was pretty severe. I am in the process of rebuilding this DB with an optimal number/size of VLFs. I am hoping that we may see an improvement, at least until I can move session state off of the server.
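
(For reference, a quick way to see the VLF count on SQL 2012; the database name below is just a placeholder:)

-- DBCC LOGINFO returns one row per virtual log file, so the row count is the VLF count.
DBCC LOGINFO ('SessionState');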
Post #1488850