Archiving Table Data

  • We are creating a data archive plan for one of our big tables, which has accumulated more than 8 million records in just 8 months. We plan to keep only the most recent 6 months of records in it and to move the rest to another table in another DB.

    The advantage I see in moving to another DB is that we can reduce the size of the existing DB, and its backup size will also shrink. Is there a better plan?

    My concern is that both the insert into the new table and the delete from the existing table are logged operations, which will cause the log to grow, and I would have to shrink the file to reclaim the space. How can I avoid this, or do it better?

    Thanks

  • Rechana Rajan (10/23/2016)

    If you run the archiving process frequently enough, the sizes are manageable; 1 million rows per month is only about 1,370 rows an hour on average. I suggest you set it up to run hourly.

    😎
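
    For example, a minimal sketch of such an hourly batch (the names dbo.BigTable, ArchiveDB.dbo.BigTable_Archive, Id, and CreatedDate are hypothetical stand-ins for the OP's schema; adjust the batch size to taste):

    -- Archive rows older than 6 months in small batches to keep log use modest.
    DECLARE @cutoff datetime2 = DATEADD(MONTH, -6, SYSDATETIME());
    DECLARE @rows int = 1;

    CREATE TABLE #batch (Id int PRIMARY KEY);

    WHILE @rows > 0
    BEGIN
        -- Pick one batch of candidate keys.
        INSERT INTO #batch (Id)
        SELECT TOP (5000) Id
        FROM dbo.BigTable
        WHERE CreatedDate < @cutoff
        ORDER BY Id;

        SET @rows = @@ROWCOUNT;
        IF @rows = 0 BREAK;

        BEGIN TRAN;

        -- Copy the batch to the archive database...
        INSERT INTO ArchiveDB.dbo.BigTable_Archive (Id, CreatedDate, Payload)
        SELECT t.Id, t.CreatedDate, t.Payload
        FROM dbo.BigTable AS t
        JOIN #batch AS b ON b.Id = t.Id;

        -- ...then remove the same rows from the source table.
        DELETE t
        FROM dbo.BigTable AS t
        JOIN #batch AS b ON b.Id = t.Id;

        COMMIT;

        TRUNCATE TABLE #batch;
    END;

    DROP TABLE #batch;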

  • Eirikur Eiriksson (10/23/2016)

    Thanks, Eirikur. So you think the plan is good? Also, how do I reclaim the space?

  • Rechana Rajan (10/23/2016)

    Yes, I think this will do the job with minimum complexity.

    😎

    Reclaiming the space, if necessary, depends on a few factors, such as the complexity of the schema, the number of tables, the file and filegroup setup, etc. As a "general" rule, create a new file and filegroup, move all the user data to the newly created file, and then truncate the original file.
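
    A minimal sketch of that approach, assuming the table has a clustered index that can be rebuilt onto the new filegroup (database, file, path, and index names are all hypothetical):

    -- Add a new filegroup and a file in it.
    ALTER DATABASE MyDb ADD FILEGROUP FG_Data2;
    ALTER DATABASE MyDb ADD FILE
        (NAME = N'MyDb_Data2', FILENAME = N'D:\Data\MyDb_Data2.ndf', SIZE = 2GB)
        TO FILEGROUP FG_Data2;

    -- Rebuilding the clustered index onto the new filegroup moves the table's data.
    CREATE UNIQUE CLUSTERED INDEX PK_BigTable
        ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON)
        ON FG_Data2;

    -- Once the data has been moved, release the now-unused space in the old file.
    DBCC SHRINKFILE (N'MyDb_Data', TRUNCATEONLY);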

  • Eirikur Eiriksson (10/23/2016)

    If you run the archiving process frequently enough, the sizes are manageable; 1 million rows per month is only about 1,370 rows an hour on average. I suggest you set it up to run hourly.

    Why would you try to reclaim space from the log file? If you reclaim the space, it'll just be needed again the next time the archiving process runs. If you shrink the log, it'll just have to grow again. This will continue in perpetuity and will only serve to slow things down while the log file grows back to where it needs to be.

    Instead, I'd suggest letting the log file grow to the size it needs and leaving it alone. When you run your log backups, the space in the log becomes reusable, so the file won't have to keep growing.
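
    A quick sketch of what that looks like (database name and backup path are hypothetical):

    -- In the FULL recovery model, a log backup frees the inactive portion of
    -- the log for reuse, so the file stops growing once it reaches a steady size.
    BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log.trn';

    -- If the log still cannot be reused, this shows what it is waiting on.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyDb';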

  • Ed Wagner (10/23/2016)

    I don't think the OP is referring to the log but to the data file. Agreed that reclaiming the log space isn't too clever.

    😎

  • Note that if you decide to shrink the main database after pruning the data, that operation will cause EXTREME FRAGMENTATION of all your objects! You will need to leave free space in the database and then do a full reindex operation on everything.
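
    A sketch of checking the fragmentation and rebuilding afterwards (the table name is hypothetical):

    -- Report fragmentation for every index in the current database.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Rebuild every index on one table.
    ALTER INDEX ALL ON dbo.BigTable REBUILD;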

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Eirikur Eiriksson (10/23/2016)

    I don't think the OP is referring to the log but to the data file. Agreed that reclaiming the log space isn't too clever.

    Yes, I meant the data file.

  • TheSQLGuru (10/23/2016)

    Note that if you decide to shrink the main database after pruning the data, that operation will cause EXTREME FRAGMENTATION of all your objects!

    Thanks Kevin.

    Yes, this is one of my concerns. The shrink will be a one-time activity, and I hope the subsequent defrag job will make things better.

    If the delete happens every hour (almost 1,500 rows), is this going to cause fragmentation?

    What if I didn't shrink the data file? Will SQL Server use that space later?

  • Rechana Rajan (10/23/2016)

    1) Whether DELETEs cause fragmentation is a matter of table data type choices and indexing. But routine index defrag maintenance would deal with that regardless, if you choose to do it.

    2) Unless your big table is a HEAP, deleting data from it will release that space back for reuse. And if it is a HEAP, you can reclaim the space either by using a TABLOCK hint with the DELETEs, by running a proper index maintenance operation, or by creating a clustered index.
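
    For example (a sketch; the heap's name and its date column are hypothetical):

    -- The TABLOCK hint lets the engine deallocate the emptied pages of a heap.
    DELETE FROM dbo.BigHeap WITH (TABLOCK)
    WHERE CreatedDate < DATEADD(MONTH, -6, SYSDATETIME());

    -- Alternatively, rebuild the heap afterwards to reclaim the empty pages.
    ALTER TABLE dbo.BigHeap REBUILD;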

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Rechana Rajan (10/23/2016)

    Yes, I meant the data file.

    My mistake. I thought you were asking about both the data file and the log file separately.

  • Ed Wagner (10/23/2016)

    Sorry Ed, I meant both. When the delete runs initially, it deletes almost 2 GB of data, and as I understand it everything gets logged, which causes the log file to grow. Later runs are every hour and delete about 1,000 records on average.
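
    A small sketch for keeping an eye on the log during that initial 2 GB purge (database name and backup path are hypothetical); taking a log backup between delete batches lets the space be reused instead of forcing the file to grow:

    -- Show the size and percentage used of every database's log.
    DBCC SQLPERF (LOGSPACE);

    -- Back up the log between delete batches (FULL recovery model).
    BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log.trn';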
