Truncate rollback
Posted Friday, April 23, 2010 1:58 PM


Brandie Tarvin (4/23/2010)
SwaroopRaj (4/23/2010)
Truncates are *minimally logged* (EDIT: in ALL database Recovery models), like having your database in Bulk-Logged Recovery Model. That means there are pointers to the pages of the just removed data that can yank that stuff back if needed.


If I understand all the mechanics correctly, then the actual TRUNCATE is done by deallocating entire pages. The log file will contain only the fact that page so-and-so was deallocated, but the log backup (if one is taken) will also include a copy of that page. So the pages that were deallocated are not available for reuse until the tran log has been backed up.

Deletes are logged more than Truncates, (EDIT: being fully logged in FULL mode down to the row) even in Bulk-Logged mode, because I think (and I could be wrong here) the pointers are more finite, pointing to the actual extents instead of the pages.


This is not quite correct. A DELETE processes individual rows. Each row deleted gets an entry in the log file, in ALL recovery models (even simple - otherwise, SQL Server would be unable to roll back or to recover after a crash). And all those entries are also written to the log backup.

So, yes, a Truncate can absolutely be rolled back. In fact, I'd be hard pressed to say what data change (not schema change) couldn't be rolled back at all.

One that has already been committed?
Seriously, I agree. In fact, even most schema changes can be rolled back.
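
For anyone who wants to see it first-hand, here's a minimal sketch (the table name and values are made up for illustration) of a TRUNCATE being rolled back inside an explicit transaction:

-- Hypothetical demo table; run this in a scratch database.
CREATE TABLE dbo.RollbackDemo (id int NOT NULL);
INSERT INTO dbo.RollbackDemo (id) VALUES (1), (2), (3);  -- row constructor syntax needs 2008+

BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.RollbackDemo;
    SELECT COUNT(*) AS rows_after_truncate FROM dbo.RollbackDemo;  -- 0
ROLLBACK TRANSACTION;

SELECT COUNT(*) AS rows_after_rollback FROM dbo.RollbackDemo;      -- 3: the deallocation was undone
DROP TABLE dbo.RollbackDemo;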



Hugo Kornelis, SQL Server MVP
Visit my SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
Post #909800
Posted Friday, April 23, 2010 2:01 PM


Wow, 72 posts for this "little" question. I can officially say this is the best QOTD I ever posted.
Post #909802
Posted Friday, April 23, 2010 3:29 PM
Steve Jones - Editor (4/22/2010)
I think this was worded a little poorly, and I didn't catch the insert issue. I ran the code on 2008, it worked, I let the question go. I thought the rollback/truncate was tricky enough to be worth 2 points.

I have added 2008 to the question header, as well as noted in the answer for error, "error on the last SELECT".

The debate is interesting here, but for those of you who say the question isn't fair because it's 2008-specific: 2008 isn't even the current version today; SQL Server 2008 R2 is. I would think that after a year and a half you would expect 2008 to be the subject of most questions.

SQL 2000 is EOL and SQL 2005 is getting close to a complete end of support (it's 2010). Regardless of what's in *your* environment, consider 2008 to be the standard.


That's fair. How about stating that all questions must be based on SQL 2008?
Post #909841
Posted Friday, April 23, 2010 4:32 PM
Hugo Kornelis (4/23/2010)

The log file will contain only the fact that page so-and-so was deallocated, but the log backup (if one is taken) will also include a copy of that page.

I'm not sure whether I'm misunderstanding you, but to my mind this makes no sense (and does not correspond to my practical experience):
- Use a DB in a test environment where you can mess with the data and backups, and let's assume the DB is set to the FULL recovery model.
- Fill a table with a couple GBs of data (using your favorite data-generation method)
- Checkpoint, just to be safe
- Backup the transaction log (ignore this backup file - if you like you can use WITH NO_LOG/TRUNCATE_ONLY, we don't need log chain continuity)
- Shrink the transaction log
-> the transaction log is down to a few MB in size
- Truncate the large table with all that test data.
- Checkpoint, just to be safe
- Back up the transaction log
-> take a look at the size of the transaction log backup... a few MB in size?

I must admit I have not followed these explicit steps in preparation for this post, but does anyone expect behaviour different from this? (does anyone expect the transaction log backup to contain copies of the deallocated pages??)


http://poorsql.com for T-SQL formatting: free as in speech, free as in beer, free to run in SSMS or on your version control server - free however you want it.
Post #909854
Posted Friday, April 23, 2010 5:22 PM


Tao Klerks (4/23/2010)
Hugo Kornelis (4/23/2010)

The log file will contain only the fact that page so-and-so was deallocated, but the log backup (if one is taken) will also include a copy of that page.

[experiment steps snipped - quoted in full in the post above]



That's an interesting theory, care to script it out and prove it?
Post #909865
Posted Friday, April 23, 2010 7:42 PM
Ninja's_RGR'us (4/23/2010)
That's an interesting theory, care to script it out and prove it?

I'm not sure my theory is that interesting, I suspect that I just misunderstood what Hugo was saying - but here's a script illustrating my point, anyway:

--Tiny DB created in the default folder, default collation, etc, set to 10% autogrow - terrible 
-- for performance & fragmentation, but we're doing this for testing only. Don't try this at home kids!
-- (also, don't create databases in the root of your system drive, or even allow the SQL Service account access to it!)
CREATE DATABASE SimpleTestDB
ON PRIMARY (NAME = SimpleTestDB_Data, FILENAME = 'C:\SimpleTestDB_Data.mdf', SIZE = 2, MAXSIZE = UNLIMITED, FILEGROWTH = 10%)
LOG ON (NAME = SimpleTestDB_Log, FILENAME = 'C:\SimpleTestDB_Log.ldf', SIZE = 1, MAXSIZE = UNLIMITED, FILEGROWTH = 10%)
GO

--Not sure what the default is, let's set it anyway.
ALTER DATABASE SimpleTestDB
SET RECOVERY FULL
GO
USE SimpleTestDB
GO

--Confirm the file sizes (in Pages):
SELECT name, filename, size FROM sysfiles
GO

--Quickly generate dummy data, let's use existing structures to accumulate data relatively fast
-- (Might as well use a heap, we will never query; again, not at home kids!)
-- (Took about 4 minutes to create 300 MB of data on a pretty-low-spec test server)
SET NOCOUNT ON
SELECT * INTO JunkData FROM master.dbo.sysobjects
DECLARE @DataInsertIterationCounter Int
SET @DataInsertIterationCounter = 0
WHILE @DataInsertIterationCounter < 1000
BEGIN
INSERT INTO JunkData SELECT * FROM master.dbo.sysobjects
SET @DataInsertIterationCounter = @DataInsertIterationCounter + 1
END
SET NOCOUNT OFF
GO

--Confirm the new file sizes
SELECT name, filename, size FROM sysfiles
GO

--Truncate the transaction log - (kids, you know the drill)
CHECKPOINT
BACKUP LOG SimpleTestDB WITH TRUNCATE_ONLY
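-- (Note: WITH TRUNCATE_ONLY/NO_LOG is gone in SQL Server 2008 and later; there you'd take a throwaway log backup to disk instead.)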
GO

--Shrink the logfile so that we can see the effect of truncating the table
DBCC SHRINKFILE (SimpleTestDB_Log)
GO

--Confirm the logfile is back to being tiny:
SELECT name, filename, size FROM sysfiles
GO

--Back up the DB so that we actually can do a transaction log backup later:
BACKUP DATABASE SimpleTestDB TO DISK = 'C:\SimpleTestDB_Pre-Truncate_Full_Backup_(Junk).BAK'
GO

--Truncate the table - this is the cool bit - takes only a sec to "delete" (deallocate?) all that data!
TRUNCATE TABLE JunkData
GO

--Confirm the logfile still tiny despite the table truncation:
SELECT name, filename, size FROM sysfiles
GO

--Actually back up the transaction log
CHECKPOINT
BACKUP LOG SimpleTestDB
TO DISK = 'C:\SimpleTestDB_Post-Truncate_Log_Backup.TRN'

GO

--Check the size of the transaction log backup file
--SQL 2000 or earlier
exec xp_getfiledetails 'C:\SimpleTestDB_Post-Truncate_Log_Backup.TRN'
--OR if your server allows xp_cmdshell
exec master..xp_cmdshell 'dir c:\SimpleTestDB_Post-Truncate_Log_Backup.TRN'
--OR otherwise - go look up the size of the file :)
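--OR query msdb for the size it recorded for this log backup (backup_size is in bytes)
SELECT TOP (1) database_name, type, backup_size, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'SimpleTestDB' AND type = 'L'   -- 'L' = transaction log backup
ORDER BY backup_finish_date DESC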
GO

--Clean Up
DROP DATABASE SimpleTestDB

--Remember to delete the 300-MB DB backup file and the stray transaction log backup file too! (manually, sorry, I'm not going to rely on the presence of xp_cmdshell)

The transaction log backup file is tiny - the fact that the pages have been deallocated is presumably logged (a list of page references?), but the pages themselves are not backed up to the transaction log (or transaction log backup) file.


http://poorsql.com for T-SQL formatting: free as in speech, free as in beer, free to run in SSMS or on your version control server - free however you want it.
Post #909899
Posted Saturday, April 24, 2010 1:55 AM


Always good to see this question - the myth that TRUNCATE TABLE is non-logged (and so cannot be rolled back) is a persistent one.

Quite clever using 2008-only syntax too - which largely defeats the 'run it then answer' crowd.

Complaining that the INSERT syntax is invalid sounds like sour grapes to me




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #909929
Posted Saturday, April 24, 2010 2:36 AM


Tao Klerks (4/23/2010)
Hugo Kornelis (4/23/2010)

The log file will contain only the fact that page so-and-so was deallocated, but the log backup (if one is taken) will also include a copy of that page.

I'm not sure whether I'm misunderstanding you, but to my mind this makes no sense (and does not correspond to my practical experience)

I believe Hugo is confusing the logging behaviour of TRUNCATE TABLE with the behaviour of minimally-logged data changes under the BULK_LOGGED recovery model.

The allocation unit deallocations performed by TRUNCATE TABLE (whether or not these are deferred and performed asynchronously on a background thread) do not change data - so BCM bits are not set, and the affected pages are not included in the next log backup.

All that needs to be logged for full recoverability is the fact that the allocation units were deallocated. See Tracking Modified Extents for details of how SQL Server uses the Bulk Changed Map, and the impact on transaction log backups.
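
If you want to see this for yourself, you can peek at the log records a TRUNCATE generates. fn_dblog is undocumented, so treat the following as a rough sketch (and dbo.SomeTable as a hypothetical table in a scratch database) rather than a supported technique:

-- Immediately after TRUNCATE TABLE dbo.SomeTable, list the log records for the current database.
-- Expect allocation-structure changes (PFS/GAM/IAM contexts) and deallocation records,
-- not copies of the data pages themselves.
SELECT [Current LSN], Operation, Context, AllocUnitName
FROM fn_dblog(NULL, NULL)
ORDER BY [Current LSN]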

Paul




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #909934
Posted Saturday, April 24, 2010 4:52 AM


Paul White NZ (4/24/2010)
Always good to see this question - the myth that TRUNCATE TABLE is non-logged (and so cannot be rolled back) is a persistent one.

Quite clever using 2008-only syntax too - which largely defeats the 'run it then answer' crowd.

Complaining that the INSERT syntax is invalid sounds like sour grapes to me



Ya, this made it like a 1-2 punch... I feel that most people who complained about it had a chance to learn 2 things (and yes, I do see your POV). Now I can't do a darn thing if they didn't learn and just want to whine about it. Anyhow, I still feel this question gave the intended results... it made people think & learn and started a nice conversation about the topic.
Post #909943
Posted Saturday, April 24, 2010 4:53 AM


Tao Klerks (4/23/2010)
Ninja's_RGR'us (4/23/2010)
That's an interesting theory, care to script it out and prove it?

I'm not sure my theory is that interesting, I suspect that I just misunderstood what Hugo was saying - but here's a script illustrating my point, anyway:

[script and conclusion snipped - see Tao Klerks' full post #909899 above]



I think I'm missing your point. Where are you rolling back the changes after the tlog backup?
Post #909944