SQLServerCentral is supported by Redgate

Urgent: How to shrink this huge DB?
Posted Tuesday, October 1, 2013 3:20 PM



Group: General Forum Members
Last Login: Yesterday @ 9:47 PM
Points: 19,747, Visits: 18,025
halifaxdal (10/1/2013)
Scott Coleman (10/1/2013)
Here are two versions, column lists removed for clarity. Don't rely on "SELECT *" to work across servers.

-- V1: Pull everything from the remote table, but insert only the rows
-- that aren't already present locally
INSERT INTO dbo.table
SELECT * FROM linksrv.db.dbo.table
WHERE id NOT IN (SELECT id FROM dbo.table);

This is what I used to do.

"SELECT id FROM dbo.table" will return 200k records from linked server and the local table will contain more than 200k once the local table gets populated, so I am afraid if the efficiency will become an issue in the future.

Yes, at some point that will start to run slowly. Since the two datasets are on separate servers, you will likely find that copying the entire table is faster than the common alternative of first figuring out which records to keep and moving only those across the wire.

An alternative would be to create an SSIS package to migrate just the appropriate data; in some cases that can be done more efficiently.

But for the sake of figuring out which data to keep and "migrate" to the other server, it usually works best to have all of the data on the same server, so the comparisons, joins, and evaluation can be done locally.
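The "bring it all local first" approach above can be sketched as follows; the staging table name dbo.table_stage is hypothetical, and the other names are the placeholders from the earlier example:

```sql
-- 1) Copy the remote table into a local staging table in one pass over the wire.
IF OBJECT_ID('dbo.table_stage') IS NOT NULL
    DROP TABLE dbo.table_stage;

SELECT *
INTO dbo.table_stage
FROM linksrv.db.dbo.table;

-- 2) Do the comparison entirely on the local server, where joins are cheap.
INSERT INTO dbo.table
SELECT s.*
FROM dbo.table_stage AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.table AS t WHERE t.id = s.id);

DROP TABLE dbo.table_stage;
```

The point of the design is that only one sequential scan crosses the linked-server boundary; all the row-matching work then runs against local indexes.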

Jason AKA CirqueDeSQLeil
I have given a name to my pain...


Posting Performance Based Questions - Gail Shaw
Post #1500608
Posted Wednesday, October 2, 2013 12:21 AM



Group: General Forum Members
Last Login: Yesterday @ 8:08 PM
Points: 40,374, Visits: 37,575
halifaxdal (10/1/2013)
Jeff Moden (10/1/2013)
halifaxdal (10/1/2013)
I did the following to empty the ldf: 1. Set the recovery model to SIMPLE; 2. DBCC SHRINKDATABASE('database_name')

This shrank the ldf from 99 GB down to 1 MB.

I then tried to delete redundant records in one of the problematic tables, which has 25,877,065 records; apparently my command is wrong:
delete table where id > 20877065
The ldf file immediately and dramatically grew to the GB level, and the query is still running. I don't know whether it will fail again by running out of space. Should I keep waiting, or should I terminate it?

And how to remove those redundant records?


Heh... you move too fast. Shrinking the database just fragmented every index you have to 99%. If you have point-in-time backups running, you also just broke the log chain by putting the DB into the SIMPLE recovery model. If you don't have point-in-time backups running, you should seriously consider them even if it's "just" a Dev database.

If you're doing deletes and you didn't take a backup just prior to doing them, you may hate yourself in the morning when someone says you deleted too much or you deleted the wrong data.

Thanks for the advice. I was comfortable when I repaired the DB, as it is not an important DB and is used by only a handful of users.

Heh... I've gotta disagree... if it's worth having, it's worth having right. All databases are important or they can be detached and dropped.
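One way to delete millions of rows without the log explosion described above is to delete in small batches, so each transaction commits quickly and, under the SIMPLE recovery model, log space can be reused between batches. A minimal sketch, assuming a hypothetical table dbo.BigTable with the same id predicate quoted earlier (and, per the advice above, take a backup first):

```sql
-- Batched delete: each loop iteration is its own short transaction, so the
-- log never has to hold the whole 5M-row delete at once. Under SIMPLE
-- recovery, the space from committed batches is reused rather than grown.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)        -- batch size is a tuning knob
    FROM dbo.BigTable         -- hypothetical table name
    WHERE id > 20877065;      -- same predicate as the post above

    SET @rows = @@ROWCOUNT;   -- loop ends when a batch deletes nothing
END;
```

Compared with one giant DELETE, this trades a little total runtime for bounded log growth and the ability to stop between batches without a massive rollback.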

--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

Helpful Links:
How to post code problems
How to post performance problems
Post #1500659