• Lynn Pettis (5/24/2016)


    I have to agree with Steve. I would also look at the indexes you have on the tables. If you have good clustered indexes that don't need to be rebuilt…

    And I have to agree with Lynn.

    A correctly defined clustered index will make the downtime required for creating it the last downtime you'll ever need.

    What would be the correct clustered index?

    The one that fits your requirements:

    During this down time I am archiving data older than x days to an OLAP database and then deleting it from the OLTP database

    So, there is some kind of datetime column which tells you how old the data is.

    That column must be chosen as the first column in your clustered index definition.

    Then new data will always be added to the tail of the index, without causing fragmentation in the existing data, and expired data will be deleted from the other end of the index, removing contiguous blocks of data without causing fragmentation either.
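    A minimal sketch of such an index, using a hypothetical dbo.Events table with EventDate (datetime) and EventID columns (all names here are illustrative, not from this thread):

    ```sql
    -- Leading the clustered key with the datetime column means
    -- inserts append at the tail and purges trim the head.
    -- EventID is added as a trailing key column to keep the key unique.
    CREATE UNIQUE CLUSTERED INDEX CIX_Events_EventDate
        ON dbo.Events (EventDate, EventID);
    ```

    Note that this is the one operation that needs a maintenance window on a large existing table; after that, the insert/purge pattern described above keeps the index from fragmenting.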

    With a correctly defined clustered index you may even remove outdated data continuously, around the clock, in small chunks, say 100 rows at a time. A job running every minute may select the top 100 records which meet the "outdated" definition (there may be fewer than 100 of them found in any particular minute), copy them to OLAP and remove them from OLTP.

    Such a background process won't have any noticeable effect on overall system performance.
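    One way such a minute-by-minute job could look, again with hypothetical table and column names, and an illustrative 90-day cutoff and 100-row batch size. DELETE with an OUTPUT clause moves each batch in a single atomic statement; since OUTPUT ... INTO cannot target a remote table, the rows land in a local staging table from which a separate step pushes them to the OLAP database:

    ```sql
    -- Runs every minute from a SQL Agent job (illustrative sketch).
    DECLARE @Cutoff datetime = DATEADD(DAY, -90, GETDATE());

    -- Delete up to 100 expired rows and capture them for archiving
    -- in one atomic statement.
    DELETE TOP (100)
    FROM dbo.Events
    OUTPUT deleted.EventID, deleted.EventDate, deleted.Payload
        INTO dbo.Events_ArchiveStaging (EventID, EventDate, Payload)
    WHERE EventDate < @Cutoff;
    ```

    Because the clustered index leads on EventDate, each batch deletes a contiguous range at the head of the index, so the delete stays cheap and takes only narrow locks.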

    Downtime would be brought to zero.

    _____________
    Code for TallyGenerator