Update query - need suggestion
Baskar B.V
SSC-Enthusiastic (175 reputation)

Group: General Forum Members
Points: 175 Visits: 341
I am planning to run an update on a very large table (table1, ~250 million rows), so I need to know which of the two queries below would be better and faster. Neither of the fields referenced from table1 is indexed. I'd appreciate your reply.

UPDATE a
SET a.field1 = b.field1
FROM table1 a (NOLOCK)
INNER JOIN table2 b (NOLOCK) ON a.id = b.id AND a.status = -1

(OR)

UPDATE a
SET a.field1 = b.field1
FROM table1 a (NOLOCK)
INNER JOIN table2 b (NOLOCK) ON a.id = b.id
WHERE a.status = -1

BASKAR BV
http://geekswithblogs.net/baskibv/Default.aspx
In life, as in football, you won’t go far unless you know where the goalposts are.

Gianluca Sartori
SSCertifiable (6.7K reputation)

Group: General Forum Members
Points: 6730 Visits: 13323
The two UPDATEs produce exactly the same execution plan.
Just a few points:

1) NOLOCK has no effect on the table being updated.
2) NOLOCK can easily produce inconsistent data; get rid of it.
3) Updating such a large table will make your log file explode. Do it in batches and back up the transaction log between batches, something like this:


DECLARE @rows int
DECLARE @batchsize int
SET @rows = 1
SET @batchsize = 10000 -- whatever size you find appropriate

WHILE @rows > 0
BEGIN
    UPDATE TOP (@batchsize) a
       SET a.field1 = b.field1
      FROM table1 a
     INNER JOIN table2 b
        ON a.id = b.id
     WHERE a.status = -1

    SET @rows = @@ROWCOUNT

    BACKUP LOG ...
END



Obviously, you would have to mark the rows already processed by each batch, in order to avoid updating the same records over and over.
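
One way to do that (a sketch only; the #processed temp table and the OUTPUT clause are illustrative additions, not part of the example above) is to capture the keys each batch updates and exclude them from later batches:

-- Hypothetical staging table to remember which ids have been touched
CREATE TABLE #processed (id int PRIMARY KEY)

DECLARE @rows int
DECLARE @batchsize int
SET @rows = 1
SET @batchsize = 10000

WHILE @rows > 0
BEGIN
    UPDATE TOP (@batchsize) a
       SET a.field1 = b.field1
    OUTPUT inserted.id INTO #processed (id) -- record the rows this batch updated
      FROM table1 a
     INNER JOIN table2 b
        ON a.id = b.id
     WHERE a.status = -1
       AND NOT EXISTS (SELECT 1 FROM #processed p WHERE p.id = a.id)

    SET @rows = @@ROWCOUNT

    BACKUP LOG ...
END

DROP TABLE #processed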

--Gianluca Sartori

How to post T-SQL questions
spaghettidba.com
@spaghettidba
Lowell
SSCoach (18K reputation)

Group: General Forum Members
Points: 18321 Visits: 39425
I think that by just adding to the WHERE clause in Gianluca's fine example, so that you don't touch fields that already match, you'll prevent re-processing of the same rows. You might need to take NULLs into account as well:

DECLARE @rows int
DECLARE @batchsize int
SET @rows = 1
SET @batchsize = 10000 -- whatever size you find appropriate

WHILE @rows > 0
BEGIN
    UPDATE TOP (@batchsize) a
       SET a.field1 = b.field1
      FROM table1 a
     INNER JOIN table2 b
        ON a.id = b.id
     WHERE a.status = -1
       AND a.field1 != b.field1
       --AND ISNULL(a.field1, '') != ISNULL(b.field1, '') -- are there NULLs?

    SET @rows = @@ROWCOUNT

    BACKUP LOG ...
END



Lowell

--
Help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statements to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question. Asking the question the right way gets you a tested answer the fastest way possible!

Baskar B.V
SSC-Enthusiastic (175 reputation)

Group: General Forum Members
Points: 175 Visits: 341
Thanks.

Even though table1 is very large, table2 would have only around 3,000 rows, so I hope a batched update is not required.

I am also planning to add a "date" condition on table1 to the WHERE clause, to filter the number of records coming from table1, even though, logically, the IDs from table2 never fall before that date anyway. Will that WHERE clause help? The "date" field has a non-clustered index on the table.

BASKAR BV
http://geekswithblogs.net/baskibv/Default.aspx
In life, as in football, you won’t go far unless you know where the goalposts are.

Gianluca Sartori
SSCertifiable (6.7K reputation)

Group: General Forum Members
Points: 6730 Visits: 13323
It depends. Compare the execution plans with and without the date condition and you'll find out for sure.
If unsure, you could also compare the performance of the two versions by changing the UPDATE to a SELECT.

If you could post the table and index scripts, or attach the execution plans, you would get better advice.
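
For example (a sketch only; the datekey column name is borrowed from a later post in this thread, and the comparison should look at the execution plans and logical reads rather than at the rows returned):

-- plan 1: the read side of the UPDATE, no date filter
SELECT a.id, a.field1 AS old_field1, b.field1 AS new_field1
  FROM table1 a
 INNER JOIN table2 b
    ON a.id = b.id
 WHERE a.status = -1

-- plan 2: the same query with the date filter added
SELECT a.id, a.field1 AS old_field1, b.field1 AS new_field1
  FROM table1 a
 INNER JOIN table2 b
    ON a.id = b.id
 WHERE a.status = -1
   AND a.datekey > '20100101'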

--Gianluca Sartori

How to post T-SQL questions
spaghettidba.com
@spaghettidba
Lowell
SSCoach (18K reputation)

Group: General Forum Members
Points: 18321 Visits: 39425
It might help...the devil is in the details.

The pseudo-code example we have so far doesn't give us enough to work with. If the big table has an index on the datetime column, the WHERE clause might use that index to find the records, so the update would be faster and would (potentially) lock fewer records. But depending on the number of records being updated, the optimizer might just decide it's easier to lock the whole table during the update; can't say for sure.

Baskar B.V (4/9/2010)
Thanks.

Even though table1 is very large, table2 would have only around 3,000 rows, so I hope a batched update is not required.

I am also planning to add a "date" condition on table1 to the WHERE clause, to filter the number of records coming from table1, even though, logically, the IDs from table2 never fall before that date anyway. Will that WHERE clause help? The "date" field has a non-clustered index on the table.




Lowell

--
Help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statements to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question. Asking the question the right way gets you a tested answer the fastest way possible!

Baskar B.V
SSC-Enthusiastic (175 reputation)

Group: General Forum Members
Points: 175 Visits: 341
This is the modified UPDATE statement, based on your feedback...

UPDATE a
   SET a.field1 = CASE WHEN b.field1 IS NOT NULL
                       THEN b.field1
                       ELSE a.field1
                  END
  FROM table1 a
 INNER JOIN table2 b ON a.id = b.id AND a.status = -1
 WHERE a.datekey > '20100101'

The table is in the PRIMARY filegroup, and the datekey clustered index is in the PRIMARY filegroup as well. After adding the WHERE condition, the clustered index scan on table1 became a clustered index seek, reducing its cost from 98% to 3%. But the cost is now split across other operators, such as the hash join (34%); earlier the hash join was 1%.

The clustered index seek on table2 is fine, though, since that small table is the one joined to the big table.

Let me know if you can make any more suggestions.

Volume of the tables:
table1 - 250 million rows
table2 - 4,000 rows

BASKAR BV
http://geekswithblogs.net/baskibv/Default.aspx
In life, as in football, you won’t go far unless you know where the goalposts are.

Baskar B.V
SSC-Enthusiastic (175 reputation)

Group: General Forum Members
Points: 175 Visits: 341
Forgot to mention in the previous post:

The total number of records that would get updated is around 10 million.

BASKAR BV
http://geekswithblogs.net/baskibv/Default.aspx
In life, as in football, you won’t go far unless you know where the goalposts are.

K Cline
Old Hand (371 reputation)

Group: General Forum Members
Points: 371 Visits: 206
How many of your records contain NULLs? Also, you may want to follow the suggestion that was put forth earlier and only update rows that are actually different. If there is a significant number of NULLs or already-matching values, you may see some improvement.

UPDATE a
   SET a.field1 = b.field1
  FROM table1 a
 INNER JOIN table2 b ON a.id = b.id AND a.status = -1
 WHERE a.datekey > '20100101'
   AND a.field1 != b.field1
   AND b.field1 IS NOT NULL
GO


John Rowan
SSCarpal Tunnel (4.4K reputation)

Group: General Forum Members
Points: 4430 Visits: 4530
As K Cline pointed out, put the updated-field comparison in the WHERE clause instead of in a CASE expression in the UPDATE. This allows the optimizer to potentially reduce the number of rows it needs to update. The CASE expression approach will not help performance.

Regarding the cost values in your execution plan: optimizing one portion of a query will, most of the time, result in the cost being redistributed to other operators. This is not always a bad thing. A query's costs have to add up to 100% somewhere, so the fact that one operator costs more than the others does not always mean the plan is inefficient. I like to base efficiency decisions more on logical reads than on the cost values in the query plan; improving logical reads is a more accurate measure of how much a query has improved.
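
For example, a minimal way to capture those numbers (standard SET STATISTICS options; the query below is just a stand-in for the real one):

SET STATISTICS IO ON
SET STATISTICS TIME ON

-- Run the candidate query; the Messages tab then reports
-- logical reads per table, which you can compare across versions.
SELECT a.id, a.field1
  FROM table1 a
 INNER JOIN table2 b ON a.id = b.id
 WHERE a.status = -1
   AND a.datekey > '20100101'

SET STATISTICS IO OFF
SET STATISTICS TIME OFF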

John Rowan

======================================================
======================================================
Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden

