Transaction Log Growth

Mike Scalise
SSCertifiable (7.4K reputation)
Group: General Forum Members
Points: 7388 Visits: 1388
Hi,

I am currently using Ola Hallengren's backup solution on a SQL Server 2016 Standard instance: a full backup one day a week, a differential every other day of the week, and hourly t-log backups. For two hours each morning--6am and 7am--the log backups are roughly 3,000 times the size of a normal t-log backup. It seems to be a pattern, and I'm trying to figure out what's going on in the system at those hours that's causing such large logs to be created.

I have much more of a database developer background than an administrator one, so I'm curious what the best way would be to find out what's causing this. I'm not sure whether there's a way to retroactively see what was happening at those times or whether I have to catch the activity in the moment. There's so much information, and so many suggested diagnostic queries, that I'm not sure which to pay attention to and which are false positives. Does anyone have ideas on a good way to approach this?

Thank you in advance,

Mike

EDIT: I should also add a few more details:

1) The diff job that runs six out of seven days of the week has the following steps:

STEP_NUMBER STEP_NAME
1 DatabaseBackup - USER_DATABASES - DIFF
2 DatabaseBackup - SYSTEM_DATABASES - FULL

2) The full backup that runs once a week has the following steps:

STEP_NUMBER STEP_NAME
1 sp_delete_backuphistory
2 sp_purge_jobhistory
3 CommandLog Cleanup
4 Output File Cleanup
5 DatabaseIntegrityCheck - USER_DATABASES
6 IndexOptimize - USER_DATABASES
7 DatabaseBackup - USER_DATABASES - FULL
8 DatabaseIntegrityCheck - SYSTEM_DATABASES
9 DatabaseBackup - SYSTEM_DATABASES - FULL

The t-log backups that are quite large happen to fall in the couple of hours following these jobs. I thought the index optimization might be causing the subsequent two backups to be large, but the index optimization only happens on one day of the week, while the two large t-log backups happen every day. I can't rule it out completely, but it seems odd that every day I'd need to plan for two gigantic transaction log backups because of db maintenance while the other t-log backups are much smaller and more manageable... then again, I might be way off and just need to expand the disk size...
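
(Side note: one way to quantify this retroactively is the backup history in msdb, which records the size of every log backup. A minimal sketch, assuming the history hasn't been purged; 'mydatabase' is a placeholder name:)

-- Size of each transaction log backup over the last two weeks,
-- so the 6am/7am spikes stand out against the normal hourly ones.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'    -- 'L' = transaction log backup
  AND bs.database_name = 'mydatabase'
  AND bs.backup_start_date >= DATEADD(DAY, -14, GETDATE())
ORDER BY bs.backup_start_date;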

Mike Scalise, PMP
https://www.michaelscalise.com
pietlinden
SSC Guru (58K reputation)
Group: General Forum Members
Points: 58028 Visits: 17939
Okay, I'm not a DBA, so take this with a grain of salt.
What if you had a table that held the size of the log over the course of the day...
CREATE TABLE LogSizeStats (
    LogSize DECIMAL(10,2),
    TimeCheck DATETIME
);
GO

Then executed something like this in a job that ran every N minutes...
INSERT INTO LogSizeStats (LogSize, TimeCheck)
SELECT
    (total_log_size_in_bytes - used_log_space_in_bytes) * 1.0 / 1024 / 1024,  -- free log space in MB
    GETDATE()
FROM sys.dm_db_log_space_usage;

Then you'd have to correlate that with what's going on on your server -- what's running when, using Profiler or something like it. (Can this be done with Extended Events?)

Then you'd just correlate the growth spikes with whatever stored procedures etc. are running just before the spike happens.
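
(A possible follow-up to this idea, as a sketch: once LogSizeStats has a day or two of samples, a windowed query can surface the intervals where free log space dropped fastest. LAG is available on SQL Server 2012 and later:)

-- Consecutive-sample deltas; the most negative values are the intervals
-- where the log filled fastest, i.e., the timestamps worth investigating.
WITH deltas AS (
    SELECT TimeCheck,
           LogSize,
           LogSize - LAG(LogSize) OVER (ORDER BY TimeCheck) AS change_mb
    FROM LogSizeStats
)
SELECT TOP (20) TimeCheck, LogSize, change_mb
FROM deltas
WHERE change_mb IS NOT NULL
ORDER BY change_mb;   -- most negative first = biggest drops in free space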
Jeff Moden
SSC Guru (938K reputation)
Group: General Forum Members
Points: 938240 Visits: 49124
Mike Scalise - Sunday, February 18, 2018 1:11 PM

Have you checked the SQL Server Agent jobs to see if any other jobs are running at those times?
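
(One sketch of how to check that from history rather than by eyeballing the schedules, assuming msdb's job-history retention covers the window; msdb.dbo.agent_datetime is an undocumented but long-standing helper that converts sysjobhistory's integer run_date/run_time columns:)

-- Job steps that started between 5am and 8am, most recent first.
SELECT j.name AS job_name,
       h.step_id,
       h.step_name,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_started,
       h.run_duration     -- HHMMSS encoded as an integer
FROM msdb.dbo.sysjobhistory AS h
INNER JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE h.step_id > 0        -- step_id 0 is the job outcome row, not a step
  AND DATEPART(HOUR, msdb.dbo.agent_datetime(h.run_date, h.run_time)) BETWEEN 5 AND 7
ORDER BY run_started DESC;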

--Jeff Moden

RBAR is pronounced ree-bar and is a Modenism for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
If you think it's expensive to hire a professional to do the job, wait until you hire an amateur. -- Red Adair

When you put the right degree of spin on it, the number 318 is also a glyph that describes the nature of a DBA's job. ;-)

Helpful Links:
How to post code problems
How to post performance problems
Forum FAQs
Mike Scalise
SSCertifiable (7.4K reputation)
Group: General Forum Members
Points: 7388 Visits: 1388
pietlinden - Sunday, February 18, 2018 2:36 PM


Thanks for the suggestion! I'm actually thinking of doing something like this with sp_WhoIsActive... running it every minute or so and logging the activity, to get a look at what's going on during a given period of time...
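
(For reference, sp_WhoIsActive has built-in support for exactly this kind of logging via its documented @return_schema and @destination_table parameters; a sketch, with the table name made up:)

-- One-time setup: have sp_WhoIsActive generate the CREATE TABLE for its own output.
DECLARE @schema VARCHAR(MAX);
EXEC dbo.sp_WhoIsActive @return_schema = 1, @schema = @schema OUTPUT;
SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveLog');  -- placeholder table name
EXEC (@schema);

-- Then, from an Agent job scheduled every minute or so:
EXEC dbo.sp_WhoIsActive @destination_table = 'dbo.WhoIsActiveLog';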


Mike Scalise, PMP
https://www.michaelscalise.com
Mike Scalise
SSCertifiable (7.4K reputation)
Group: General Forum Members
Points: 7388 Visits: 1388
Jeff Moden - Sunday, February 18, 2018 4:30 PM
Have you checked the SQL Server Agent jobs to see if any other jobs are running at those times?

Jeff,

I have looked at the other jobs and unfortunately there isn't another one that's running at the same time (or close to the same time). I do have to alter my original statement, though:

"...two large t-log backups are happening every day"

It looks like the two large t-logs happen after the integrity check and index reorgs/rebuilds on the one day a week that I do those things, not every day. So I'm wondering if it points more to those operations than anything else... If that's the case, there may not be anything to do about it. It's not that I'd want to omit those steps, and maybe I just have to live with the fact that I'll have two gigantic t-log backups each week, followed by a ton of normal-sized ones...


Mike Scalise, PMP
https://www.michaelscalise.com
Jeff Moden
SSC Guru (938K reputation)
Group: General Forum Members
Points: 938240 Visits: 49124
Mike Scalise - Monday, February 19, 2018 12:52 PM

Index reorgs are an absolute log pig no matter the Recovery Model. Index rebuilds are a double whammy in the Full Recovery Model because, for any index over 128 extents (that's just 8MB), the old index stays in place while the new one is built unless you get a bit tricky, and, on top of that, the rebuilds are fully logged in the Full Recovery Model.
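
(For what it's worth, the most commonly cited way to get "a bit tricky" -- not necessarily what Jeff has in mind -- is to switch to the BULK_LOGGED recovery model around index maintenance so rebuilds are minimally logged. A sketch, with the usual caveats: point-in-time restore is not possible to a point inside a log backup that contains minimally logged operations, and that log backup still includes the changed data extents, so the backup itself can stay large even though the live log file stays small. 'mydatabase' and the paths are placeholders:)

-- Bracket index maintenance with a recovery model switch.
BACKUP LOG mydatabase TO DISK = N'X:\Backups\mydatabase_pre_maint.trn';

ALTER DATABASE mydatabase SET RECOVERY BULK_LOGGED;

-- ... run IndexOptimize / ALTER INDEX ... REBUILD here ...

ALTER DATABASE mydatabase SET RECOVERY FULL;

-- Back up the log immediately to close the minimally logged window.
BACKUP LOG mydatabase TO DISK = N'X:\Backups\mydatabase_post_maint.trn';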

--Jeff Moden

Chris Harshman
SSC-Dedicated (39K reputation)
Group: General Forum Members
Points: 39556 Visits: 7341
Mike Scalise - Monday, February 19, 2018 12:47 PM

Are the transaction log files growing during these times, or are they always big and just filling up more during those hours? If the files are growing, that information actually gets captured in the default trace (EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow), which you can see like this:
DECLARE @path NVARCHAR(260);

-- The default trace rolls over; pointing at 'log.trc' in the same folder
-- lets fn_trace_gettable read the whole rollover set.
SELECT @path = REVERSE(SUBSTRING(REVERSE([path]), CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT td.DatabaseName, td.Filename, te.name AS Event,
       (td.IntegerData * 8) / 1024 AS Change_MB,   -- IntegerData = growth in 8KB pages
       td.StartTime, td.EndTime,
       td.LoginName, td.HostName, td.ApplicationName, td.spid,
       td.ClientProcessID, td.IsSystem, td.SqlHandle, td.TextData
FROM sys.fn_trace_gettable(@path, DEFAULT) td
INNER JOIN sys.trace_events te ON td.EventClass = te.trace_event_id
WHERE td.EventClass IN (92, 93)
ORDER BY td.StartTime;

Mike Scalise
SSCertifiable (7.4K reputation)
Group: General Forum Members
Points: 7388 Visits: 1388
Jeff Moden - Monday, February 19, 2018 1:18 PM

Jeff,

Thanks for the information. Are you referring to online index rebuilds as the double whammy? If so, isn't that Enterprise-only? Regardless, are the old indexes deleted automatically once the new ones are created? I know you said it's all fully logged, so is there any good way to prevent some of this extra activity from bloating the t-log backups? I know you said there are ways to do some trickery, but I really just want to do what makes sense and is safe... If there's nothing more to do, then so be it...

Thanks,

Mike


Mike Scalise, PMP
https://www.michaelscalise.com
Mike Scalise
SSCertifiable (7.4K reputation)
Group: General Forum Members
Points: 7388 Visits: 1388
Chris Harshman - Monday, February 19, 2018 2:00 PM

Chris,

Thank you. This is very interesting. Here are my results for the database in question. I'm not exactly sure how to interpret this. Can you help me understand what it indicates?

DatabaseName Filename Event Change_MB StartTime EndTime
mydatabase mydatabase_log01 Log File Auto Grow 2149 2018-02-09 23:22:16.750 2018-02-09 23:22:44.800
mydatabase mydatabase_log01 Log File Auto Grow 2364 2018-02-11 05:17:02.333 2018-02-11 05:17:37.237
mydatabase mydatabase_log01 Log File Auto Grow 2601 2018-02-11 05:41:17.990 2018-02-11 05:41:54.767
mydatabase mydatabase_log01 Log File Auto Grow 2861 2018-02-11 05:54:54.140 2018-02-11 05:55:34.870
mydatabase mydatabase_log01 Log File Auto Grow 3147 2018-02-11 06:12:02.763 2018-02-11 06:12:48.667
mydatabase mydatabase_log01 Log File Auto Grow 3462 2018-02-11 06:35:04.557 2018-02-11 06:35:54.093
mydatabase mydatabase_log01 Log File Auto Grow 3808 2018-02-12 23:24:10.850 2018-02-12 23:25:04.240
mydatabase mydatabase_log01 Log File Auto Grow 4189 2018-02-15 23:21:57.480 2018-02-15 23:22:52.953
mydatabase mydatabase_log01 Log File Auto Grow 4608 2018-02-18 05:47:27.050 2018-02-18 05:48:29.620
mydatabase mydatabase_log01 Log File Auto Grow 5069 2018-02-18 06:13:41.223 2018-02-18 06:14:55.630
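
(One detail worth noting in this output: each growth is roughly 10% larger than the previous one, which is the signature of percent-based FILEGROWTH, and the widening StartTime-to-EndTime gaps show each grow taking longer as the file gets bigger. A sketch of checking the setting and switching to a fixed increment -- the 512MB figure is illustrative, not a sizing recommendation:)

-- Check the log file's growth setting (is_percent_growth = 1 means percent-based).
SELECT name, size * 8 / 1024 AS size_mb, growth, is_percent_growth
FROM mydatabase.sys.database_files
WHERE type_desc = 'LOG';

-- Switch to a fixed-size increment so each grow is predictable.
ALTER DATABASE mydatabase
MODIFY FILE (NAME = mydatabase_log01, FILEGROWTH = 512MB);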


Mike Scalise, PMP
https://www.michaelscalise.com
Chris Harshman
SSC-Dedicated (39K reputation)
Group: General Forum Members
Points: 39556 Visits: 7341
Did the query return anything in the LoginName or ApplicationName columns? That would help pinpoint who or what caused the log to grow. With the results you show here, you at least have the times the log file grew, but you still need to match them up with what was executing then. If it's a scheduled SQL Agent job, you'd see something like this for ApplicationName:
SQLAgent - TSQL JobStep (Job 0x475A3D830555AE4F854CCB63761ED284 : Step 1)
And that job id can be matched to msdb tables like this:
SELECT * FROM msdb.dbo.sysjobs   -- job_id is a uniqueidentifier; the binary literal converts implicitly
WHERE job_id = 0x475A3D830555AE4F854CCB63761ED284
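
(Taking that one step further -- a hypothetical sketch, untested against this trace, that pulls the hex job id out of ApplicationName and joins it straight to msdb.dbo.sysjobs so each growth event gets a job name:)

DECLARE @path NVARCHAR(260);
SELECT @path = REVERSE(SUBSTRING(REVERSE([path]), CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

-- '0x' + 32 hex characters = 34 characters starting right after '(Job '.
SELECT j.name AS job_name, td.StartTime, (td.IntegerData * 8) / 1024 AS Change_MB
FROM sys.fn_trace_gettable(@path, DEFAULT) td
INNER JOIN msdb.dbo.sysjobs j
   ON CONVERT(VARCHAR(34), CONVERT(VARBINARY(16), j.job_id), 1)
    = SUBSTRING(td.ApplicationName, CHARINDEX('(Job ', td.ApplicationName) + 5, 34)
WHERE td.EventClass IN (92, 93)
  AND td.ApplicationName LIKE 'SQLAgent - TSQL JobStep%'
ORDER BY td.StartTime;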
