Update query very slow - SQL Server

  • Dear friends,

    My table has 100,000,000 records. I update the data with a query like:

    update ALL_MOBILE_DATA_PART
    set DistrictID = 3
    where FULL_ADDRESS like '%KARIMNAGAR%'

    It takes 30 minutes. How can I make my query faster?

  • Since you use the wildcard % at the beginning of the search term, SQL Server cannot use an index seek on that column.

    This means SQL Server has to scan the entire 100 million row table to find all the rows that match the filter criteria.
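    For comparison, a pattern anchored at the start of the string is sargable, so an index on FULL_ADDRESS could be seeked. A minimal sketch (the index name here is hypothetical):

    ```sql
    -- Hypothetical supporting index; it only helps patterns anchored at the start.
    CREATE INDEX IX_ALL_MOBILE_DATA_PART_FULL_ADDRESS
        ON ALL_MOBILE_DATA_PART (FULL_ADDRESS);

    -- Can use an index seek: the pattern is anchored at the start of the string.
    SELECT COUNT(*)
    FROM   ALL_MOBILE_DATA_PART
    WHERE  FULL_ADDRESS LIKE 'KARIMNAGAR%';

    -- Cannot seek: the leading % forces a scan of every row.
    SELECT COUNT(*)
    FROM   ALL_MOBILE_DATA_PART
    WHERE  FULL_ADDRESS LIKE '%KARIMNAGAR%';
    ```
    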

    Maybe you can do the update in batches, so that the table isn't locked for 30 minutes in one giant transaction.

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • polo.csit (4/28/2014)



    I agree with what Koen said, it's spot-on.

    Try something like this:

    SET ROWCOUNT 1000

    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION

        update ALL_MOBILE_DATA_PART
        set DistrictID = 3
        where FULL_ADDRESS like '%KARIMNAGAR%'

        IF @@ROWCOUNT = 0
        BEGIN
            COMMIT TRANSACTION
            BREAK
        END

        COMMIT TRANSACTION
    END

    SET ROWCOUNT 0

  • SQL is delicious (4/28/2014)



    I would advise against that. Using SET ROWCOUNT like this is deprecated; it will stop affecting INSERT, UPDATE, and DELETE statements in a future release.

    http://technet.microsoft.com/en-us/library/ms188774.aspx

    _______________________________________________________________

    Need help? Help us help you.

    Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

    Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
    Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
    Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
    Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/

  • Sean Lange (4/28/2014)



    Hehe, oops. Thanks for the heads up! Guess he'd be better off using TOP(1000) instead, right?

  • SQL is delicious (4/28/2014)



    Yeah, changing your loop to use TOP (1000) would be better. 😉


  • Just a thought...

    The looping is going to cause a full table scan of a 100 Million row table for each and every loop. That's a shedload of I/O that can actually be avoided.

    Do a SELECT using the given criteria to find all the PKs and store them in a temp table. That'll take 1 scan. Then have your UPDATE work a loop for every 1000 items based on the PK. Things will go a lot faster.

    The only caveat is that you might want to have it recheck the rows for the search value, to make sure the table data hasn't been changed in the meantime. It will still be a ton faster because the searches will now be done by PK.
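    A rough sketch of that approach, assuming the table has an integer primary key column named ID (hypothetical; substitute your table's actual key):

    ```sql
    -- One scan of the big table: capture the keys of every row to change.
    SELECT ID
    INTO   #ToUpdate
    FROM   ALL_MOBILE_DATA_PART
    WHERE  FULL_ADDRESS LIKE '%KARIMNAGAR%';

    CREATE CLUSTERED INDEX CIX_ToUpdate ON #ToUpdate (ID);

    DECLARE @Batch TABLE (ID int PRIMARY KEY);
    DECLARE @rows  int = 1;

    WHILE @rows > 0
    BEGIN
        DELETE FROM @Batch;

        -- Pull the next 1000 keys off the work queue.
        DELETE TOP (1000) FROM #ToUpdate
        OUTPUT deleted.ID INTO @Batch;

        SET @rows = @@ROWCOUNT;

        IF @rows > 0
        BEGIN
            -- Update by PK only; recheck the search value per the caveat.
            UPDATE a
            SET    a.DistrictID = 3
            FROM   ALL_MOBILE_DATA_PART AS a
            JOIN   @Batch AS b
              ON   b.ID = a.ID
            WHERE  a.FULL_ADDRESS LIKE '%KARIMNAGAR%';
        END
    END
    ```

    Driving the loop off the DELETE's row count (rather than the UPDATE's) means the loop still drains the queue even when the recheck filters a whole batch out.
    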

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (4/28/2014)



    If I am reading the code right, it is even worse than that, Jeff. It seems to me that it is written as an infinite loop: it will run an update of 1000 rows over and over and over, right? I see nothing that will stop it from updating the same rows on every iteration. I do admit to being pretty exhausted and frazzled right now, though. :hehe:

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru (4/28/2014)



    No, you pretty much nailed it. At the very least you would need to add a WHERE DistrictID <> 3 predicate, or something along those lines.
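    Putting the fixes from this thread together, the loop might look like this (the batch size of 1000 is arbitrary):

    ```sql
    DECLARE @rows int = 1;

    WHILE @rows > 0
    BEGIN
        BEGIN TRANSACTION;

        -- TOP (1000) replaces the deprecated SET ROWCOUNT, and the extra
        -- predicate keeps the loop from re-updating the same rows forever.
        UPDATE TOP (1000) ALL_MOBILE_DATA_PART
        SET    DistrictID = 3
        WHERE  FULL_ADDRESS LIKE '%KARIMNAGAR%'
          AND  (DistrictID <> 3 OR DistrictID IS NULL); -- IS NULL in case the column is nullable

        SET @rows = @@ROWCOUNT;

        COMMIT TRANSACTION;
    END
    ```
    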

