Table change script is taking too long!

  • Hi All,

    Need some expert advice on the below scenario.

As per the business need, we are increasing the size of certain columns across the database. The database size is 40 GB.

Basically, this is what happens inside the script:

Drop all the constraints (including the primary keys).

Then run a series of ALTER TABLE statements to increase the size of the columns in all the tables that have that column.

Then re-create the indexes and constraints.

It is taking more than 5 hours.

In the middle, we ran out of log space.

What I did was increase the log space to 12 GB.

I changed the recovery model to SIMPLE and then started the script again.

This time it completed within 2 hours.

Once everything was fine, I shrank the log file

and changed the recovery model back to FULL.
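Roughly, the commands I ran around the script looked like this (a sketch; the database and log file names here stand in for the real ones):

ALTER DATABASE db1 SET RECOVERY SIMPLE;
GO

-- ... run the table change script here ...

DBCC SHRINKFILE (db1log, 1024);   -- shrink the log back down (target size in MB)
GO

ALTER DATABASE db1 SET RECOVERY FULL;
GO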

The same script has to be executed against one more database, which is 60 GB and contains the same set of tables. I also changed the recovery model to SIMPLE for minimal logging, but for this one it is taking more than 5 hours.

What should I do to boost the performance?

Do I need to increase the size of tempdb as well, for creating the primary keys (all the data inside the table has to be sorted)?

Now I have a few questions:

1. Dropping the constraints should not take much time. Am I right, given that I have changed the recovery model to SIMPLE?

2. Creating the constraints/indexes will need more space and capacity planning for the .mdf and .ldf files. How can I estimate the space so that I can allocate it beforehand and avoid autogrowth? (See the sketch at the end of this list.)

3. Re-creating the primary keys will build clustered indexes, which (I believe) sort in tempdb. I have made enough room for this as well: tempdb size is 5442.87 MB, with 5373.34 MB available, which shouldn't be a problem while creating the indexes. But why is it taking so long?

4. I ensured no other connections are open except mine.

5. Available RAM on that machine is 2 GB.

Are there any other aspects I need to consider that I may have missed?
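For question 2, this is the kind of estimate I had in mind (a sketch; sp_spaceused is the standard procedure, but the table, database, and file names are placeholders):

-- Current reserved/data/index size for one affected table
EXEC sp_spaceused 'DBA.TNAME';

-- Totals for the whole database
EXEC sp_spaceused;

-- Then pre-grow the files once instead of relying on autogrowth, e.g.:
ALTER DATABASE db1 MODIFY FILE (NAME = db1log, SIZE = 12288MB);

Is that the right way to go about it?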

    Looking for sincere replies.

    Thanks in Advance!

Is the 60GB database on the same server as the 40GB db? The time could vary depending on the configuration of the server on which the database resides.

    The_SQL_DBA
    MCTS

    "Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."

Or... does the same set of tables in the 60GB db have many more rows than those in your 40GB db?

Yes, the 60 GB database also resides on the same instance.

Basically, both databases are involved in replication.

There are some common tables that are being replicated from a different database on a different SQL Server instance.

Hi guys,

I have an important point to figure out here. It is taking a huge amount of time.

I stopped the script execution.

I found a few front-end applications were connected to the server, so I killed all those connections.

Next, I also stopped SQL Server Agent and ensured no jobs were running.

I tested one ALTER script with the NOCHECK option, trying to add a primary key, which again is clustered.

Example:

ALTER TABLE [DBA].[TNAME]
WITH NOCHECK
ADD CONSTRAINT PK_ID PRIMARY KEY CLUSTERED
(
    ID,
    DATEPERIOID,
    SECID,
    ELEMENTID,
    INSTID,
    RACID
) WITH (FILLFACTOR = 90) ON [PRIMARY];

This is the statement I have.

Enabling this constraint takes 40 minutes; I have clearly observed that.

To reduce the logging in the log file, before running the script I changed the recovery model to SIMPLE rather than going for FULL.

Here are some quick questions.

1. I have observed that there are 2 crore (20 million) rows in the table.

It took 40 minutes.

My question is: I am just adding the primary key constraint here, with a clustered index, WITH NOCHECK. That should mean no data check happens against the existing data and the constraint is simply created. So why is it taking so long?

I am executing the space-monitoring script below against tempdb and against the database I am running the ALTER script on.

I can see there is a lot of free space available in all the data files, and in tempdb as well.

While executing, I can only see small MB-sized changes in db1log.ldf, nothing more than that.

Tempdb also looks good!

System configuration:

tempdb.mdf    3 GB
tempdb.ldf    2 GB
db1_01.mdf   10 GB
db1_02.ndf   10 GB
db1_03.ndf   10 GB
db1_04.ndf   10 GB
db1_05.ndf   10 GB
db1log.ldf   12 GB

I executed the script below to monitor the space and see whether anything is happening (i.e. any index creation or anything else). But the statement is still taking a huge amount of time.

USE tempdb
GO
SELECT Name, Filename,
    CONVERT(Decimal(15,2), ROUND(a.Size/128.000, 2)) AS [Currently Allocated Space (MB)],
    CONVERT(Decimal(15,2), ROUND(FILEPROPERTY(a.Name, 'SpaceUsed')/128.000, 2)) AS [Space Used (MB)],
    CONVERT(Decimal(15,2), ROUND((a.Size - FILEPROPERTY(a.Name, 'SpaceUsed'))/128.000, 2)) AS [Available Space (MB)]
FROM dbo.sysfiles a (NOLOCK)
GO

USE db1
GO
SELECT Name, Filename,
    CONVERT(Decimal(15,2), ROUND(a.Size/128.000, 2)) AS [Currently Allocated Space (MB)],
    CONVERT(Decimal(15,2), ROUND(FILEPROPERTY(a.Name, 'SpaceUsed')/128.000, 2)) AS [Space Used (MB)],
    CONVERT(Decimal(15,2), ROUND((a.Size - FILEPROPERTY(a.Name, 'SpaceUsed'))/128.000, 2)) AS [Available Space (MB)]
FROM dbo.sysfiles a (NOLOCK)
GO

My questions about why it is taking so much time:

Question 1: Even though I haven't specified the WITH CHECK option, is SQL Server doing a row-by-row constraint check?

Question 2: When it creates the index in the above case, does it rearrange the 2 crore (20 million) rows in the data file, since it is a clustered index? Is any reorganization being done inside the data files (the PRIMARY filegroup) in this case?

Please, I am looking forward to the experts' thoughts!

It has taken a very long time to change/run the scripts, and the script is still executing.

Can anybody explain what is happening inside SQL Server in such a scenario?

Please bear with my language. I am totally frustrated!

    Thanks in Advance

Here is one of the SQL statements that is taking a long time.

ALTER TABLE TNAME ALTER COLUMN VERSIONID CHAR(10);

The table contains 4 crore (40 million) rows.

It took me 40 minutes to execute the above statement.

Before running it, I set the database to the SIMPLE recovery model.

I used the same space-monitoring script as in my earlier post to watch which files in the filegroup were being modified, and found the changes were being made to the .LDF file; it took 40 minutes.


1. My question is: if the recovery model is set to FULL, how long would it take? It should take more time, right?

Please correct me if I have done something wrong!

2. Also, I want to know what is written into the log. Does it write each and every change to the log, just as in FULL, the only difference being that when the log reaches a threshold value, say 70-80%, it overwrites the log file? Is that right? Please advise!

Rather than SIMPLE, can I go for the BULK_LOGGED model so that I can reduce the writes to my .LDF file?

I am saying this because even if I lose the data, I still have the script with me to run the ALTER again.

So, will the BULK_LOGGED recovery model reduce my time?

  • If you run your efforts in one script without explicit transactions then it is all ONE (implicit) transaction. This means that even in SIMPLE mode the tlog will grow until EVERYTHING is done (which can be a tremendous amount given what you are trying to do). I recommend using explicit transactions around smaller parts of the script, implementing error checking/rollbacks if necessary, and perhaps even explicit log backups (perhaps truncate_only is all you need) to keep the tlog from exploding.
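A minimal sketch of that batching pattern (table names are placeholders, and it assumes SIMPLE recovery; in FULL/BULK_LOGGED you would take log backups between batches instead):

BEGIN TRANSACTION;
    ALTER TABLE dbo.Table1 ALTER COLUMN VERSIONID CHAR(10);
COMMIT TRANSACTION;
CHECKPOINT;  -- in SIMPLE recovery this lets the committed log space be reused

BEGIN TRANSACTION;
    ALTER TABLE dbo.Table2 ALTER COLUMN VERSIONID CHAR(10);
COMMIT TRANSACTION;
CHECKPOINT;

-- In FULL/BULK_LOGGED recovery, between batches:
-- BACKUP LOG db1 TO DISK = 'D:\Backups\db1_log.trn';
-- (On SQL Server 2005 and earlier only: BACKUP LOG db1 WITH TRUNCATE_ONLY;)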

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

True!

I will split the script into small parts, which gives us more control.

Thank you.

I would create copies of the tables to be altered using SELECT INTO, and name the new tables with "NEW_" in front of the old name. Inside a transaction, TRUNCATE each table (which is minimally logged), then run the ALTER TABLE statements; on an empty table the ALTER will be instantaneous. Then refill the altered table from the "NEW_" copy using:

INSERT INTO TableName1 (col1, col2, col3)

SELECT col1, col2, col3

FROM NEW_TableName1

Do a rowcount of the original table at the start and end of the transaction. If they are not equal, then ROLLBACK. (A fuller sketch is below.)
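Putting that together, a minimal end-to-end sketch with hypothetical table and column names:

-- 1. Copy the data out (SELECT INTO is minimally logged under SIMPLE/BULK_LOGGED recovery)
SELECT col1, col2, col3
INTO NEW_TableName1
FROM TableName1;

DECLARE @before INT, @after INT;
SELECT @before = COUNT(*) FROM TableName1;

BEGIN TRANSACTION;

    -- 2. Empty the original table (minimally logged) and alter it while empty;
    --    assumes foreign keys referencing it were already dropped, as earlier in this thread
    TRUNCATE TABLE TableName1;
    ALTER TABLE TableName1 ALTER COLUMN col2 CHAR(10);

    -- 3. Refill from the copy
    INSERT INTO TableName1 (col1, col2, col3)
    SELECT col1, col2, col3
    FROM NEW_TableName1;

    SELECT @after = COUNT(*) FROM TableName1;

-- 4. Compare rowcounts before committing
IF @before = @after
    COMMIT TRANSACTION;
ELSE
    ROLLBACK TRANSACTION;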

Absolutely right!

    Let me revise the script and see what it takes.

    Thank you so much.

Hi,

In my opinion the best approach would be to create a new table with the changed column and just transfer the data there, then rename the new and old tables, then create all the constraints. If the new table is created correctly, drop the old table. A sketch is below.
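A rough sketch of that approach (sp_rename is the real procedure; the table and column names are placeholders):

-- 1. Create the new table with the widened column and copy the data across
SELECT col1, CONVERT(CHAR(10), VERSIONID) AS VERSIONID, col3
INTO TNAME_new
FROM TNAME;

-- 2. Swap the names; keep the old table around until the new one is verified
EXEC sp_rename 'TNAME', 'TNAME_old';
EXEC sp_rename 'TNAME_new', 'TNAME';

-- 3. Re-create the constraints/indexes on the new table, verify it, and then:
-- DROP TABLE TNAME_old;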

    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Do not reinvent the wheel.
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

  • "ALTER TABLE TNAME ALTER COLUMN VERSIONID CHAR(10);

    Table contains 4 crore data in it.

    It took me 40 minutes to execute the above statement."

    The reason for this is that sql is adding a new column on the table and transfering the old data to the new table and then marking the old col inactive. When you do the index/contraint create it will drop the col that is inactive.
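So the space from the inactive column comes back once the clustered index is rebuilt. If you want to force that explicitly, something like this works (ALTER INDEX is SQL Server 2005+; the table name is a placeholder):

-- Rebuilding all indexes rewrites the rows and drops the inactive column copy
ALTER INDEX ALL ON dbo.TNAME REBUILD;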

    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Do not reinvent the wheel.
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

  • Super thanks!

  • tvantonder-992012 (1/15/2010)


Hi,

In my opinion the best approach would be to create a new table with the changed column and just transfer the data there, then rename the new and old tables, then create all the constraints. If the new table is created correctly, drop the old table.

    I agree except I'd use SELECT/INTO to also create the table. Done correctly, it will be minimally logged and comparatively very fast.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
