Large Transaction - GO Statement Inside?

  • Hi

    I am confused by the GO statement. I've been advised that, to reduce the memory buffer when dealing with thousands of inserts/updates/deletes in ONE LARGE TRANSACTION, it would be best to place GO statements every 2000 lines or so (depending on circumstances). However, my understanding of GO is that it seems to COMMIT data, which would make a ROLLBACK impossible.

    Can anyone help me with this?

    Thanks

  • Try it for yourself:

    create table foo(bar int)
    GO
    select * from foo            -- empty: table just created
    Begin transaction
    Insert into foo select 99
    Select * from foo            -- shows the new row
    GO
    Select * from foo            -- row is still there: the transaction survived the GO
    Rollback transaction
    Select * from foo            -- empty again: ROLLBACK worked across the batch boundary
    GO
    Select * from foo            -- still empty
    GO
    Drop table foo

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • The GO statement is used by osql and isql to logically break a SQL script into batches that are sent to SQL Server one at a time. Each chunk of code between GO statements constitutes a batch. There is good info on this in BOL. Using GO does not commit the transaction. Consider this example:

    CREATE TABLE GOTest (RowID int PRIMARY KEY, Value varchar(10))
    INSERT INTO GOTest
    SELECT 1, 'One' UNION ALL
    SELECT 2, 'Two'

    SET NOCOUNT ON
    BEGIN TRANSACTION
    UPDATE GOTest
    SET Value = 'blank'
    WHERE RowID = 2
    GO
    SELECT @@TRANCOUNT as 'Open Transactions'   -- returns 1: the transaction is still open after GO
    ROLLBACK
    SELECT * FROM GOTest                        -- RowID 2 still reads 'Two': the update was rolled back
    DROP TABLE GOTest

    John Rowan

    ======================================================
    Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden

  • You're quick, Barry. I checked for other responses just before clicking the Post Reply button.

    John Rowan

    ======================================================
    Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden

  • GO is just a command to the client SQL program (Query Analyzer, SSMS, etc.) telling it to terminate the current batch and execute it. GO does not terminate the current session or process, and transactions are session-scoped entities. So transactions are not COMMITted when a GO is encountered, as the previous script demonstrates.
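
    One corollary: while transactions survive a GO, local variables do not - DECLAREd variables are scoped to the batch, which is why scripts that span GOs have to re-declare their variables in each batch. A quick sketch:

    BEGIN TRANSACTION
    DECLARE @x int
    SET @x = 1
    SELECT @x AS 'Variable', @@TRANCOUNT AS 'Open Transactions'   -- returns 1 and 1
    GO
    -- The transaction is still open, but @x is gone: referencing it here would
    -- fail with "Must declare the variable '@x'"
    SELECT @@TRANCOUNT AS 'Open Transactions'                     -- still returns 1
    ROLLBACK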

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • Heh, you too.

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • Thanks for the information.

    OK - So, suppose I have a large transaction that consists of multiple stored procedure calls, each outputting a return value, as per the below - but imagine thousands of these calls. The problem is that system memory on the server is insufficient. I am considering putting GO statements every 1000 lines or so to try to reduce the memory used - would this be a good idea, or is there a better solution? The script is being called by OSQL, and the stored procedures do not have their own transactions.

    set nocount on
    set xact_abort on

    declare @BeginTranCount int
    declare @Ret_Status int
    declare @StorProc varchar(1000)
    declare @StatementNo int

    set @BeginTranCount = @@TRANCOUNT
    set @Ret_Status = 0
    set @StorProc = ''
    set @StatementNo = 0

    BEGIN TRANSACTION

    EXEC @Ret_Status = CDI_PROCESS_1 9999, 'Test1', 'AnotherTest1', 'And Yet Another1', 'P', 0, 1
    IF coalesce(nullif(@Ret_Status, 0), @@error) <> 0 BEGIN SET @StorProc = 'CDI_PROCESS_1' SET @StatementNo = 1 GOTO Error_Handler END

    EXEC @Ret_Status = CDI_PROCESS_2 8888, 'Test2', 'AnotherTest2', 'And Yet Another2', 'P', 1, 0
    IF coalesce(nullif(@Ret_Status, 0), @@error) <> 0 BEGIN SET @StorProc = 'CDI_PROCESS_2' SET @StatementNo = 1 GOTO Error_Handler END

    IF @@TRANCOUNT > @BeginTranCount
    BEGIN
    COMMIT TRANSACTION
    PRINT 'Successful Update!'
    GOTO Update_Complete
    END

    Error_Handler:
    ROLLBACK TRANSACTION
    PRINT 'Update Failed! on StoredProcedure ' + @StorProc + ' Statement Number:- ' + cast(@StatementNo as varchar)
    RAISERROR ('Script aborted by StoredProcedure - Failed to process data!.', 16, 127)

    Update_Complete:
    GO

  • OK - I've now come up with this, which seems to work well, though I do have a few questions that I hope someone might be able to answer.

    1. The RETURN statement appears to be doing nothing in this scenario. I know it is normally used in stored procs, but I believed it would still stop the script from progressing?

    2. Will the GO statement, when the script is run by OSQL (processed in batches), help reduce the memory held in the buffer before the commit to the target system? As a result I will lose elegant error trapping for debugging purposes, but reducing the memory drain is the priority.

    3. The last @@ERROR check is in fact not doing anything, so I might as well remove it?

    4. I have multiple large transactions similar to the below to be processed on a target system. Would it be a good idea to execute DBCC DROPCLEANBUFFERS, DBCC FREEPROCCACHE and CHECKPOINT statements between each transaction, again to reduce memory use and help prevent the insufficient-memory error message?

    use Test
    go

    set nocount on
    set xact_abort on

    declare @Ret_Status int
    set @Ret_Status = 0

    BEGIN TRANSACTION

    EXEC @Ret_Status = CDI_PROCESS_1 9999, 'Test1', 'Update', 'UPDATE', 'P', 0, 1
    IF coalesce(nullif(@Ret_Status, 0), @@error) <> 0 BEGIN ROLLBACK TRAN Raiserror('Error in Script. Aborting!', 16, 127) Return END
    GO

    declare @Ret_Status int
    EXEC @Ret_Status = CDI_PROCESS_1 8888, 'Test2', 'Insert', 'INSERT', 'P', 1, 0
    IF coalesce(nullif(@Ret_Status, 0), @@error) <> 0 BEGIN ROLLBACK TRAN Raiserror('Error in Script. Aborting!', 16, 127) Return END
    GO

    If @@error = 0 and @@TRANCOUNT > 0
    begin
    Commit Transaction
    PRINT 'Successful Update!'
    End
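
    As an aside on the RETURN question, its reach is easy to see in isolation: it exits only the batch it appears in, and GO starts the next batch regardless, so the script keeps going. (If memory serves, osql only stops at such errors when invoked with its -b exit-on-error switch.) A minimal sketch:

    PRINT 'batch 1: before RETURN'
    RETURN
    PRINT 'batch 1: after RETURN'   -- never reached: RETURN exits the current batch
    GO
    PRINT 'batch 2 still runs'      -- a new batch starts here; the RETURN above cannot stop it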

    Thanks

  • 1. Correct. It's useless here. It may even produce an error in some circumstances.

    2. No.

    3. Yes.

    4. Very bad idea.

    You probably have a completely wrong idea about SQL Server memory usage.

    Memory is used for data in transition (including temp tables, hash joins, execution plans, etc.).

    SQL Server always takes as much memory as is available and never releases it to the OS (unless you restart the service).

    When there is no space for a new query's data, SQL Server removes the oldest cached data from memory.

    DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE will just slow down the execution of ALL QUERIES, because all procedures will need to be recompiled and all execution plans rebuilt.

    CHECKPOINT will increase physical IO, which will slow down your processing as well.

    If you want to prevent the "insufficient memory" error message, you need to learn how to build effective queries. Stop using cursors and loops, stop loading whole lots of data into variables, and start using proper joins.

    Your server will be very thankful for that. 🙂
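
    For example, the first block below does thousands of separate statements' worth of work one row at a time, while the join-based version does it in a single pass (a sketch only - the Orders and OrdersToProcess names are made up):

    -- Row-by-row loop: one UPDATE per record (the pattern to avoid)
    DECLARE @id int
    SET @id = 1
    WHILE @id <= 10000
    BEGIN
        UPDATE dbo.Orders SET Status = 'P' WHERE OrderID = @id
        SET @id = @id + 1
    END

    -- Set-based with a proper join: one statement updates every matching row
    UPDATE o
    SET Status = 'P'
    FROM dbo.Orders o
    JOIN dbo.OrdersToProcess p ON p.OrderID = o.OrderID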

    _____________
    Code for TallyGenerator

  • Thanks Sergiy.

    Could someone explain why our company regularly places GO statements within a transaction when the script is called by OSQL? There has to be a reason for this. Is it definitely not the case that placing these GO statements, and therefore "batching" the data, is better than having the final transaction process all the data in one LARGE batch?

    The stored procs' queries are quite efficient in that there are no cursors/loops/temp tables and very, very few joins. I think the issue may be the number of records that each of the 34-ish stored procs must process - the example above is part of a 4.5MB SQL script.

    Basically I have two scripts for each update:-

    1. The first drops/recreates the 34 stored procs to ensure the target system has the latest information for processing.

    2. The second script (as per the example above) then processes each RECORD for update/delete/insert. The only checks done in the stored procs above are for the existence of the record prior to an insert.

    Phase 2 is the actual data update for the target system and therefore is the large transaction file.

    I am thinking that if I run DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE prior to the first script, this would ensure that the buffers are clean prior to building the new stored procs, which will have to be recompiled on execution in any case.

    I then sometimes have a situation where Script 1 & Script 2 for one update are immediately followed by another update as soon as the previous one completes, so recompiling will occur anyway; however, I thought that executing a CHECKPOINT between these updates would help, as it would be OUTSIDE of the transaction and before the subsequent large transaction.

    Thanks in advance - this is becoming urgent, as we may have to revert to the clunky Delphi executable on every site to process the data in ASCII files!
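
    Since the data already arrives as ASCII files, one alternative worth sketching (the staging table, file path and delimiters below are invented for illustration) is to bulk load each file into a staging table and replace the thousands of per-record proc calls with a few set-based statements:

    -- Load the whole ASCII file in one operation (names and delimiters are examples only)
    BULK INSERT dbo.CDI_Staging
    FROM 'C:\updates\records.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

    -- One set-based pass replaces thousands of individual existence-checked inserts
    BEGIN TRANSACTION
    INSERT INTO dbo.CDI_Target (RowID, Value)
    SELECT s.RowID, s.Value
    FROM dbo.CDI_Staging s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.CDI_Target t WHERE t.RowID = s.RowID)
    COMMIT TRANSACTION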
