• Tao Klerks (4/23/2010)


    Ninja's_RGR'us (4/23/2010)


    That's an interesting theory, care to script it out and prove it?

    I'm not sure my theory is that interesting; I suspect I just misunderstood what Hugo was saying - but here's a script illustrating my point anyway:

    --Tiny DB created in the default folder, default collation, etc, set to 10% autogrow - terrible

    -- for performance & fragmentation, but we're doing this for testing only. Don't try this at home kids!

    -- (also, don't create databases in the root of your system drive, or even allow the SQL Service account access to it!)

    CREATE DATABASE SimpleTestDB

    ON PRIMARY (NAME = SimpleTestDB_Data, FILENAME = 'C:\SimpleTestDB_Data.mdf', SIZE = 2, MAXSIZE = UNLIMITED, FILEGROWTH = 10%)

    LOG ON (NAME = SimpleTestDB_Log, FILENAME = 'C:\SimpleTestDB_Log.ldf', SIZE = 1, MAXSIZE = UNLIMITED, FILEGROWTH = 10%)

    GO

    --Not sure what the default is, let's set it anyway.

    ALTER DATABASE SimpleTestDB

    SET RECOVERY FULL

    GO
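    --(To check rather than guess: DATABASEPROPERTYEX reports the current recovery model -

    -- a new database inherits it from model, so it may well already be FULL.)

    SELECT DATABASEPROPERTYEX('SimpleTestDB', 'Recovery') AS RecoveryModel

    GO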

    USE SimpleTestDB

    GO

    --Confirm the file sizes (in Pages):

    SELECT name, filename, size FROM sysfiles

    GO

    --Quickly generate dummy data, let's use existing structures to accumulate data relatively fast

    -- (Might as well use a heap, we will never query; again, not at home kids!)

    -- (Took about 4 minutes to create 300 MB of data on a pretty-low-spec test server)

    SET NOCOUNT ON

    SELECT * INTO JunkData FROM master.dbo.sysobjects

    DECLARE @DataInsertIterationCounter Int

    SET @DataInsertIterationCounter = 0

    WHILE @DataInsertIterationCounter < 1000

    BEGIN

        INSERT INTO JunkData SELECT * FROM master.dbo.sysobjects

        SET @DataInsertIterationCounter = @DataInsertIterationCounter + 1

    END

    SET NOCOUNT OFF

    GO

    --Confirm the new file sizes

    SELECT name, filename, size FROM sysfiles

    GO

    --Truncate the transaction log - (kids, you know the drill; note WITH TRUNCATE_ONLY only exists through SQL 2005 - it was removed in SQL Server 2008, where you'd briefly switch to SIMPLE recovery instead)

    CHECKPOINT

    BACKUP LOG SimpleTestDB WITH TRUNCATE_ONLY

    GO

    --Shrink the logfile so that we can see the effect of truncating the table

    DBCC SHRINKFILE (SimpleTestDB_Log)

    GO

    --Confirm the logfile is back to being tiny:

    SELECT name, filename, size FROM sysfiles

    GO

    --Back up the DB so that we actually can do a transaction log backup later:

    BACKUP DATABASE SimpleTestDB TO DISK = 'C:\SimpleTestDB_Pre-Truncate_Full_Backup_(Junk).BAK'

    GO

    --Truncate the table - this is the cool bit - takes only a sec to "delete" (deallocate?) all that data!

    TRUNCATE TABLE JunkData

    GO

    --Confirm the logfile is still tiny despite the table truncation:

    SELECT name, filename, size FROM sysfiles

    GO

    --Actually back up the transaction log

    CHECKPOINT

    BACKUP LOG SimpleTestDB

    TO DISK = 'C:\SimpleTestDB_Post-Truncate_Log_Backup.TRN'

    GO

    --Check the size of the transaction log backup file

    --SQL 2000 or earlier

    exec master..xp_getfiledetails 'C:\SimpleTestDB_Post-Truncate_Log_Backup.TRN'

    --OR if your server allows xp_cmdshell

    exec master..xp_cmdshell 'dir c:\SimpleTestDB_Post-Truncate_Log_Backup.TRN'

    --OR otherwise - go look up the size of the file :)

    GO
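    --(Alternative that stays inside SQL Server: msdb keeps a history of backup sizes.

    -- Just a sketch - backup_size is in bytes, type 'L' marks log backups, and this

    -- assumes the msdb backup history hasn't been purged.)

    SELECT database_name, type, backup_finish_date, backup_size

    FROM msdb.dbo.backupset

    WHERE database_name = 'SimpleTestDB'

    ORDER BY backup_finish_date

    GO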

    --Clean Up

    DROP DATABASE SimpleTestDB

    --Remember to delete the 300-MB DB backup file and the stray transaction log backup file too! (manually, sorry, I'm not going to rely on the presence of xp_cmdshell)

    The transaction log backup file is tiny - the page deallocations are presumably what gets logged (a list of page references?), but the contents of the pages themselves are not written to the transaction log (or to the transaction log backup file).
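    If you want to see that directly, you can peek at the log with the undocumented ::fn_dblog function before taking the log backup - just a sketch (fn_dblog is unsupported and its output varies by version):

    SELECT Operation, Context, COUNT(*) AS LogRecordCount

    FROM ::fn_dblog(NULL, NULL)

    GROUP BY Operation, Context

    ORDER BY LogRecordCount DESC

    After the TRUNCATE you'd expect mostly allocation-page modifications (PFS/GAM/IAM contexts) rather than one delete record per row.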

    I think I'm missing your point. Where are you rolling back the changes after the tlog backup?