Table backup

  • People: I need to back up a big table because I'm going to add some columns. How can I make a backup of a single table?

  • You cannot.

    But there are workarounds, of course.

    1: Make a backup of the full database, restore it to a new database, and just use the one table.

    2: bcp or DTS export to file. If necessary, you can use the script wizard or a query tool to get the CREATE statement. (A sketch of options 1 and 2 follows this list.)

    3: Use DTS export to export the table to a new, empty database.
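
    A rough sketch of options 1 and 2. All database, server, table, and file names here are placeholders, and the logical file names in the MOVE clauses are assumptions (check yours with RESTORE FILELISTONLY):

    -- Option 1: full backup, then restore under a new name
    BACKUP DATABASE MyDb TO DISK = 'C:\backup\MyDb.bak'

    RESTORE DATABASE MyDb_copy FROM DISK = 'C:\backup\MyDb.bak'
    WITH MOVE 'MyDb_Data' TO 'C:\data\MyDb_copy.mdf',
         MOVE 'MyDb_Log'  TO 'C:\data\MyDb_copy.ldf'

    -- Option 2: bcp the table out in native format over a trusted connection
    -- (run from the command line, not Query Analyzer)
    bcp MyDb.dbo.BigTable out C:\backup\BigTable.bcp -n -S MyServer -T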

    Antares probably knows ten better methods.

    Joachim.

  • Nothing wrong with any of those. Depending on your definition of big, I'll add my favorite hack: putting it into an Access mdb, either via DTS or by doing a 'make table' query from within Access.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • My 2 penny worth (or cents if you like). In cases like these I create another table in the same database and copy the contents to it.

    e.g. create tableacopy and copy tablea to tableacopy.
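
    A minimal sketch of that copy, using the poster's example names; SELECT ... INTO creates the copy and fills it in one step:

    select *
    into tableacopy
    from tablea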

    Far away is close at hand in the images of elsewhere.
    Anon.

  • Thanks to all of you!

    I'm so lazy... I really miss my 'backup table' option in the 6.5 version... hahaha

    You could also place it in a separate filegroup and back up that filegroup. Check out the BOL topics "How to back up files and filegroups (Transact-SQL)" and "How to restore files and filegroups (Transact-SQL)". If this is the only table you are working with it may help, but since you are making changes you might still need to back up the primary filegroup, as there will be system changes there. To go further, moving all tables but that one to a filegroup other than PRIMARY and backing up PRIMARY might work, again if this is the only table you are working with.

    Test it first, though, as I have not tried it yet, but based on all I have read this should work.
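
    A rough sketch of that setup, with hypothetical names and paths (MyDb, BigTableFG; SQL Server 2000 syntax):

    -- create the new filegroup and give it a file
    ALTER DATABASE MyDb ADD FILEGROUP BigTableFG

    ALTER DATABASE MyDb ADD FILE
      (NAME = BigTableFG1, FILENAME = 'C:\data\MyDb_BigTableFG1.ndf')
    TO FILEGROUP BigTableFG

    -- once the table lives on that filegroup (see the clustered-index trick
    -- later in this thread), back up just that filegroup
    BACKUP DATABASE MyDb FILEGROUP = 'BigTableFG' TO DISK = 'C:\backup\MyDb_BigTableFG.bak'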


  • Here is an inelegant but straightforward way we use to create a quick backup of a table in the same database:

    select *
    into [backup_table_name]
    from [source_table_name]

    It might not be a good method for a really big table, but we use it quite happily on tables of more than 1,000,000 records.

  • quote:


    Here is an inelegant but straightforward way we use to create a quick backup of a table in the same database:

    select *
    into [backup_table_name]
    from [source_table_name]

    It might not be a good method for a really big table, but we use it quite happily on tables of more than 1,000,000 records.


    Or take this same thing a step further and create it in another database. I believe you are making changes to the table structure, though, so this may not be the best way to do it. But doing it like so might help:

    Create a backup database, do the SELECT INTO into the backup database, back up the backup database, then drop the backup table (to be safe, use a different name).
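
    A sketch of that sequence, with hypothetical names (BackupScratch, MyDb, dbo.BigTable):

    CREATE DATABASE BackupScratch

    -- copy the table out, back up the scratch database, then tidy up
    SELECT * INTO BackupScratch.dbo.BigTable_bak
    FROM MyDb.dbo.BigTable

    BACKUP DATABASE BackupScratch TO DISK = 'C:\backup\BackupScratch.bak'

    DROP TABLE BackupScratch.dbo.BigTable_bak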


  • I use the "select * into" option that others have mentioned. But I also generate a script with all indexes and constraints. The "select * into" does not copy these objects.

    If you need to restore. Run the script and then insert records.
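
    A sketch of that restore step, assuming the saved script has already recreated an empty dbo.BigTable with its indexes and constraints (both table names are hypothetical):

    -- refill the recreated table from the copy made earlier with SELECT * INTO
    INSERT INTO dbo.BigTable
    SELECT * FROM dbo.BigTable_backup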

  • Unless I am misunderstanding, why create the backup at all? If you have a dump of the db, plan an off time for disaster recovery in the event that the following fails: recently, I added two NOT NULL columns to a table of more than 10 million rows using the ALTER TABLE command. There is no need to issue an sp_rename, create a new table with the new columns, and 'pipe' the data over to it. No rebuilding of indexes is necessary, etc. Only binding of defaults is necessary, plus any updates to the table for the new columns.

    But if you must complete a backup, I feel using DTS is overkill for something that is quite simple in script. As others have implied, you could take the route of renaming the table, creating the new one via SELECT INTO, defaulting the new column values as it goes, then creating keys and indexes and binding any defaults as necessary.
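
    A minimal sketch of the in-place ALTER TABLE approach, with hypothetical table, column, and constraint names; a NOT NULL column can only be added like this when a DEFAULT supplies the value for existing rows:

    ALTER TABLE dbo.BigTable
    ADD new_flag int NOT NULL
    CONSTRAINT DF_BigTable_new_flag DEFAULT 0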

  • Consider the impact on your transaction log if you copy to a table in the same db: it will grow with every record you add to the copy table. Copy to another db, one you are not backing up, to keep the transaction log size down.

  • rprice: check the BOL for "Bulk-Logged Recovery"

    SELECT INTO doesn't generate transaction log activity the way an INSERT statement does.
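
    For reference, a sketch of switching the recovery model around the copy (MyDb is a placeholder; SQL Server 2000 syntax):

    ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED

    -- ... run the SELECT INTO here ...

    ALTER DATABASE MyDb SET RECOVERY FULL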

  • For ease and flexibility (not the tlog), I'd do it in another db on the same server. Can back it up separately and if you need to restore in a hurry, you haven't added the overhead of restoring your backup table as well.

    Steve Jones

    sjones@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/sjones

  • There is another possibility:

    You can move your big table into another filegroup (and therefore a new file) by dropping and recreating the clustered index with the ON FILEGROUP clause, and then without any problem you can use the standard

    "backup database YourDatabase FILEGROUP = YourNewFileGroup to YourBackupDevice"

    command.
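
    A sketch of that move, with hypothetical table, column, and index names; WITH DROP_EXISTING rebuilds the clustered index on the new filegroup in one step:

    CREATE CLUSTERED INDEX IX_BigTable_id
    ON dbo.BigTable (id)
    WITH DROP_EXISTING
    ON YourNewFileGroup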



    Bye
    Gabor

  • quote:


    People: I need to back up a big table because I'm going to add some columns. How can I make a backup of a single table?


    To summarize this topic:

    1) Use SELECT INTO ... to copy the data to another table.

    2) Use SELECT INTO to copy the data to a table in another database whose recovery model is set to simple or bulk-logged; this reduces the impact on the source database's transaction log.

    3) Create a new filegroup and place this table in it, then do a filegroup backup.

    4) Use BCP to unload the table.

    5) Use DTS to copy the table to another destination (probably overkill for the request).

    6) Do nothing and use your database backup.
