Restore onto another server - same c**p as before?

  • I have backup files and transaction log copy files on my production server.

    I want to load the database onto another server. So I shall copy the files across from one server to another and then do a restore on the new server.

    However, the production database is disorganised and has many (unintentional) heap tables - it's an old design :rolleyes: There are loads of pages which seem to be many miles from where they should be.

    Will I end up with the same rubbish on my new server as on the old? :crying:

The database structures will be the same on the new server, so I can't get rid of the heaps. However, I was hoping to start with a relatively clean sheet, with the rows where they should be and nothing left hanging around.

If you take a backup and restore it elsewhere, the restored database will be identical to the source database at the time it was backed up, fragmentation and all. The only thing that you won't have (maybe) is file-system fragmentation.
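For reference, a minimal restore sequence of the kind described might look like this (database, file, and path names are all hypothetical):

```sql
-- Restore the full backup; NORECOVERY leaves the database ready for log restores.
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB_full.bak'
WITH MOVE N'SalesDB_data' TO N'E:\Data\SalesDB.mdf',
     MOVE N'SalesDB_log'  TO N'F:\Logs\SalesDB_log.ldf',
     NORECOVERY, STATS = 10;

-- Apply the copied transaction log backups in sequence.
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_log1.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_log2.trn' WITH NORECOVERY;

-- Bring the database online; the restored pages are identical to the
-- source at backup time, fragmentation included.
RESTORE DATABASE SalesDB WITH RECOVERY;
```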

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
Heaps have nothing to do with database structure; they just mean you have a table without a clustered index. I assume you have indexes - make one on each table a clustered index. If possible, make an intelligent choice: something used in range queries.

If you have things very fragmented, putting a clustered index on the table will help as well.
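As a sketch (the table and column names are made up), finding the heaps and then converting one:

```sql
-- List every user table that is a heap (index_id = 0 means no clustered index).
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id)        AS table_name
FROM sys.indexes AS i
WHERE i.index_id = 0
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1;

-- Turn one heap into a clustered table; the data is rewritten in key order.
CREATE CLUSTERED INDEX CIX_Orders_OrderDate
ON dbo.Orders (OrderDate);
```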

  • my 2ct.

    "It doesn't matter" is to heavy, because you may need to restore more pages than with a nicely 100% organized db.

    However, you will need the original databases file sizes of space available on your new spot !

    After a restore, always perform db maintenance.

    - reindex

    - update the statistics

    - update usage data

    Keep in mind you may need to resync sqlserver accounts on the new instance.
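The maintenance steps above might be sketched like this (table names are illustrative; `sp_change_users_login` with the `'Report'` action lists database users whose SQL logins lost their mapping):

```sql
-- Rebuild all indexes on a table (repeat per table, or loop over sys.tables).
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Refresh statistics database-wide.
EXEC sp_updatestats;

-- Correct page/row count metadata after the restore.
DBCC UPDATEUSAGE (0);

-- Find SQL logins whose database users lost their mapping on the new instance.
EXEC sp_change_users_login 'Report';
```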

And of course, set up your new DRP (disaster recovery plan).

    Johan

    Learn to play, play to learn !

Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:

- How to post Performance Problems
- How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • Thanks for that. Or no thanks, that wasn't what I wanted to hear!:-)

    I guess exporting from production then importing onto the new server would put the data back in optimum condition?

    I'm having to use USB drives because the LAN cannae handle it cap'n. She'll break up!:rolleyes:

    So I guess I could

    1) copy the backup files from production onto the USB drives

    2) restore them onto the new server

    3) export the data from the new server back onto the USB drives

    4) trash the data on the new server

    5) import the data back again

The only trouble is that, at USB 2.0 speeds, copying 500GB of data will take about 8 hours per move.
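The 8-hour figure is about right, assuming an effective sustained USB 2.0 throughput of roughly 17 MB/s (real-world rates vary widely with the drive and enclosure):

```python
def transfer_hours(size_gb: float, throughput_mb_s: float) -> float:
    """Hours needed to move size_gb of data at a sustained throughput in MB/s."""
    seconds = size_gb * 1024 / throughput_mb_s
    return seconds / 3600

# 500 GB at ~17 MB/s effective throughput: roughly 8.4 hours per move.
print(round(transfer_hours(500, 17), 1))
```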

    Unless you know better...:hehe:

  • Or you could restore the backup, add clustered indexes to the tables that are heaps and drop the clustered indexes straight after. That'll remove extent fragmentation.
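That trick, sketched against a hypothetical table: creating the clustered index rewrites the heap into contiguous extents in key order, and dropping it straight afterwards leaves the compacted heap behind.

```sql
-- Rebuilds the table's pages into contiguous extents.
CREATE CLUSTERED INDEX CIX_tmp_Orders ON dbo.Orders (OrderID);

-- Dropping it reverts the table to a heap, keeping the tidier layout.
DROP INDEX CIX_tmp_Orders ON dbo.Orders;
```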

Gail Shaw
  • That is what is known as a very good idea.

    Thanks 😀

  • Gail has a good idea, but I'm still not sure why you're afraid of a clustered index.

By default, every table should have a clustered index!

If you don't have one now, convert one of the (single-column/narrow) indexes.

If that CLIX doesn't serve you well, pick another one!

If you cannot find a CLIX to support grouping operations, just pick a sequentially growing one

(identity / default getdate() / ...).

I would avoid uniqueidentifiers, as they tend to cause page fragmentation very fast and hence need more maintenance. (If you still want to use a uniqueidentifier column, provision your index pages with free space.)

Also keep in mind that if your CLIX isn't a unique index, SQL Server will "uniquify" it for you, because every non-clustered index stores the clustered key and uses it to reach the actual data row!

And only if you cannot find a CLIX that serves you well without massively hurting your system should you drop the CLIX!
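For example, if a uniqueidentifier key is unavoidable, leaving free space on the pages (or generating sequential GUIDs) reduces page splits. A sketch, with illustrative names:

```sql
-- Leave 20% free space per page to absorb random-GUID inserts.
CREATE CLUSTERED INDEX CIX_Customers_Guid
ON dbo.Customers (CustomerGuid)
WITH (FILLFACTOR = 80);

-- Alternatively, generate sequential GUIDs at insert time;
-- NEWSEQUENTIALID() is only valid as a column default.
ALTER TABLE dbo.Customers
ADD CONSTRAINT DF_Customers_Guid DEFAULT NEWSEQUENTIALID() FOR CustomerGuid;
```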

    Johan


  • The problem is more the impact analysis.

    Using the wrong index can seriously screw up responses and we're already in the doghouse with the customers.

    The choice of index for each table will need careful consideration, testing, etc. :doze:

Indeed, you still need to prepare these actions.

In the best circumstances, you would collect performance data (a trace), analyse it, and check your results with the dev team.

... we're already in the doghouse with the customers...

Chances are your customers are suffering precisely because the lack of a CLIX leaves your data pages poorly organised!

That's why I always suggest starting such an operation by converting an existing index.

    If current customers complain, ask for budget to do the proper analysis and create the correct indexes.

    If this would be the first time such analysis is performed, you will gain without any doubt !

    (or it would be the first db ever that is delivered in an optimal configuration !) :blink:

    Johan


  • Err, I am on the dev team!

    Our SQLIS group are as much use as the proverbial chocolate teapot.

    No, scratch that, at least you can eat a chocolate teapot, and these guys...:w00t:

    Don't get me started, I'll get very upset and start shouting 😉

On an aside...

My experience of this is that if you keep putting off the tasks that require a lot of testing and some guts, you will eventually run out of options.

If you continually make small fixes to a system but avoid the large issues (like adding clustered indexes that may affect the order data is returned in), there will come a day when there are no more "tactical" fixes you can apply, and you end up with a system that doesn't respond and no way to make it work without a massive testing effort.

Take the plunge: look at the big issues first and get them underway. While they are being tested you can then look at the smaller issues.

    MVDBA

  • jwalker25, don't get upset, :-D, we've all been there.

    Certainly understand politics, just wanted to be sure this wasn't ignorance or a lack of understanding.

    If you were to replace a current NCI (non clustered index) with a CLI (clustered index), it is possible that you could see slightly worse performance, but not very likely. Instead, many of your query plans might improve as the optimizer expects a CLI.

    Almost every detailed conversation I have had with developers at Microsoft has them assuming a CLI exists on every table. If you have 20 rows it doesn't matter because everything is a scan, but otherwise you ought to be setting a CLI on each table.

    Do them sequentially, but honestly I think this has little downside and a lot of potential upside.

  • To assess the possible impact of getting rid of a heap I need to understand how a heap behaves.

    So what happens when you do inserts and deletes?
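One way to watch that behaviour in an existing database is to look at forwarded records and unused space in the heaps (updates that grow a row leave forwarding pointers behind; deletes can leave part-empty pages that are never reclaimed). A sketch:

```sql
-- Heaps only (index_id = 0); DETAILED mode populates forwarded_record_count.
SELECT OBJECT_NAME(ps.object_id)  AS table_name,
       ps.forwarded_record_count,          -- rows moved, leaving forwarding pointers
       ps.avg_page_space_used_in_percent,  -- low values suggest delete "holes"
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
WHERE ps.index_id = 0;
```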

Viewing 15 posts - 1 through 15 (of 19 total)
