Insert vs Update

  • How does an INSERT differ from an UPDATE in the way it gets processed in storage, memory, and CPU?

    Asking because our updates have slowed down by about 40%. Our ETL used to run overnight in about 10 hours and is now taking over 14; all updates are taking about 40% longer. Any suggestions? I appreciate all the information.

  • An UPDATE needs to search the table to find the row(s) that need updating.  An INSERT can just put a new row in the proper place (heap or index) without having to search as hard for where it should go.

    The larger the table is, the worse the UPDATE can be without proper indexes.  I would look at building some indexes to help the updates.
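
    For example, something like this.  The column names here are made up for illustration; the point is to give the UPDATE an index that supports its WHERE clause so the engine can seek to the affected rows instead of scanning the whole table:

    -- Hypothetical: the UPDATE filters on LastModifiedDate, so index that column.
    CREATE NONCLUSTERED INDEX IX_tbl_People_LastModifiedDate
        ON dbo.tbl_People (LastModifiedDate);

    -- The same UPDATE that previously scanned the table can now seek.
    UPDATE dbo.tbl_People
    SET    IsActive = 0
    WHERE  LastModifiedDate < DATEADD(YEAR, -1, GETDATE());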

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Here is the nightly ETL process:

    1. Tables get truncated
    2. Constraints get dropped
    3. Bulk imports
    4. Table updates
    5. Indexes and constraints get created

    Day in, day out. Then, about 2 weeks ago, the process started taking 30 to 40 percent longer.

    Could it be the disks? It's a high-speed SSD SAN.

  • Do you have execution plans from before & after the slowdown that you can compare?

    Is the amount of data increasing?

    I doubt it's hardware related, but check the server event logs for problems.
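
    If you don't have old plans, the data question at least is easy to check: snapshot row counts per table and compare between runs.  A sketch using only the standard catalog views:

    -- Row count per table; save the output from each run and diff them.
    SELECT  s.name      AS schema_name,
            t.name      AS table_name,
            SUM(p.rows) AS row_count
    FROM    sys.tables t
    JOIN    sys.schemas s    ON s.schema_id = t.schema_id
    JOIN    sys.partitions p ON p.object_id = t.object_id
    WHERE   p.index_id IN (0, 1)   -- heap or clustered index only, avoids double counting
    GROUP BY s.name, t.name
    ORDER BY row_count DESC;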

  • It could be the disks.  It could be the data.  It could be the CPU or memory.  It could be blocking.  It could be none of those.  It could be a combination of those.

    At my workplace, we had a set of SSIS packages that ran nightly.  Normally this took about 2 hours to complete, but then one day it ran for over 8.  After some digging, we determined that SSIS had paged to disk because we ran out of memory, and that was the cause of our performance hit.  After adjusting some things and freeing up more memory on the machine, we were back to our 2-hour window.

    It could be disks if the SAN is doing something to your disks at the same time you are trying to run things.  For example, if it is doing some disk consistency checks at the same time, that will slow down all disk activity.

    Now it could be the data.  If the data got substantially larger or crossed some threshold in size or quantity, it may have caused a different execution plan to come up for pulling the data out.  Alternately, if the data grew substantially, it could simply be that larger data takes longer to process.

    Could be something got scheduled or installed on the server, so you have fewer resources (CPU and memory) to work with for your ETL.

    Could be your network is congested between your servers.

    There are a lot of things it could be.  If it was a sudden change, my best guess is that it is related to data changing and causing the execution plan to change resulting in a slower plan OR resource pressure.
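
    If blocking or resource pressure is a suspect, a query like this run during a slow window will show what the sessions are waiting on.  It uses only the standard DMVs, nothing custom:

    -- Active requests that are blocked or waiting on something.
    SELECT  r.session_id,
            r.command,
            r.wait_type,
            r.wait_time AS wait_time_ms,
            r.blocking_session_id
    FROM    sys.dm_exec_requests r
    WHERE   r.session_id <> @@SPID
      AND   (r.blocking_session_id <> 0 OR r.wait_time > 0);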


  • Without seeing the actual execution plans of the parts that are taking the longest, we're all just "guess'n and suggest'n".  You need to figure out what the "high points" are and post the ACTUAL execution plans to be able to figure this out.
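
    If you're on SQL Server 2016 or later and Query Store happens to be enabled, you may be able to recover plans even without having saved them.  A sketch (the LIKE filter is just a guess at how the update statements start):

    -- Pull the slowest recent statements with their stored plans.
    SELECT TOP (20)
            qt.query_sql_text,
            rs.avg_duration / 1000.0      AS avg_duration_ms,
            rs.last_execution_time,
            TRY_CAST(p.query_plan AS xml) AS query_plan
    FROM    sys.query_store_query_text qt
    JOIN    sys.query_store_query q          ON q.query_text_id = qt.query_text_id
    JOIN    sys.query_store_plan p           ON p.query_id = q.query_id
    JOIN    sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
    WHERE   qt.query_sql_text LIKE N'%update dbo.tbl_%'
    ORDER BY rs.avg_duration DESC;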

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • jcarranza 23978 wrote:

    1. Tables get truncated
    2. Constraints get dropped
    3. Bulk imports
    4. Table updates
    5. Indexes and constraints get created

    Day in, day out. Then, about 2 weeks ago, the process started taking 30 to 40 percent longer.

    Could it be the disks? It's a high-speed SSD SAN.

    Based on the order you have above, the updates are likely to grow slower as the data grows, especially if your updates use joins to other tables and/or filter records - without indexes you are talking table scans, potentially a lot of them.

    You most likely have to either recreate all the indexes before the updates, or at least add some of them beforehand - but without a valid execution plan it's very hard to help further.
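
    Something along these lines - purely illustrative since we haven't seen the real SQL (PersonID and SomeColumn are made-up names) - builds the join-supporting index before the update phase instead of waiting for step 5:

    -- Index the join key on the staging side before running the updates.
    CREATE NONCLUSTERED INDEX IX_tbl_St5_PersonID
        ON dbo.tbl_St5 (PersonID);

    -- The join can now seek instead of repeatedly scanning tbl_St5.
    UPDATE p
    SET    p.SomeColumn = s.SomeColumn
    FROM   dbo.tbl_People AS p
    JOIN   dbo.tbl_St5    AS s ON s.PersonID = p.PersonID;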

  • Thank you everyone.

    Unfortunately I do not have execution plans from the ETL runs before the timing started to change. I only have logs of how long the steps took. For example:

    2020-08-29 06:37:50 update stats: 369 table updates in 226.93 mins; average per table : 36.90 secs

    2020-08-29 06:37:50 update stats: 68 parallel updates in 118.74 mins; average per column: 26.39 secs; total cols: 270

    2020-08-29 06:37:50 update stats: 40 sequential updates in 143.27 mins; average per column: 82.65 secs; total cols:

    --------------------------------------------------------------------------------------------------------------------------------------

    2020-09-12 10:30:13 update stats: 369 table updates in 328.80 mins; average per table : 53.46 secs

    2020-09-12 10:30:13 update stats: 68 parallel updates in 155.07 mins; average per column: 34.46 secs; total cols: 270

    2020-09-12 10:30:13 update stats: 40 sequential updates in 219.03 mins; average per column: 126.36 secs; total cols: 104

    For example, a particular table went from 25 mins to 32 mins...

    We have hundreds of tables, and if you add up the minutes here and there, it takes over 3 hours more to complete. It is not just this table but just about every table.

    update dbo.tbl_People....

    -------------------------------------------

    2020-09-08 00:27:46 330388 (31.0 mins)

    2020-09-06 16:16:53 330371 (41.5 mins)

    2020-09-05 00:16:21 330277 (28.8 mins)

    2020-09-03 00:21:25 329971 (31.3 mins)

    2020-09-02 00:31:38 329851 (29.9 mins)

    2020-09-01 00:31:05 329678 (29.5 mins)

    2020-08-30 17:14:48 329390 (32.2 mins)

    ------------------------------------------

    2020-08-28 23:28:10 329270 (25.0 mins)

    2020-08-27 23:33:41 329156 (24.8 mins)

    2020-08-26 23:35:00 328979 (24.7 mins)

    2020-08-12 23:46:35 327228 (25.4 mins)

    2020-08-07 00:14:53 326524 (24.9 mins)

    2020-08-05 23:51:40 326367 (26.0 mins)

  • Are you updating the tables sequentially or are some being done at the same time?

    That being said, how many rows are being updated in tbl_People?  If on 2020-08-07 it updated 100 rows but on 2020-09-08 it updated 10,000 rows, it will take longer.  You will need more metrics than just duration to troubleshoot this.
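
    Something like this would capture the row count next to the duration (the etl_log table and its columns are hypothetical, just to show the pattern):

    DECLARE @t0   datetime2 = SYSDATETIME();
    DECLARE @rows int;

    UPDATE dbo.tbl_People          -- stand-in for the real update
    SET    SomeFlag = 1
    WHERE  SomeFlag = 0;

    SET @rows = @@ROWCOUNT;        -- capture immediately, before anything resets it

    INSERT INTO dbo.etl_log (table_name, rows_affected, duration_s)
    VALUES (N'dbo.tbl_People', @rows, DATEDIFF(SECOND, @t0, SYSDATETIME()));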

    Think about it like driving to the store from your house - if I told you it took 30 minutes to drive there on Monday but 40 minutes on Tuesday, your first thought is going to be that traffic must have been worse on Tuesday.  But it could be that I was speeding on Monday, or there was road work on Tuesday and I needed to take a detour, making the drive longer.  Or it could be that I drove on Monday and took my bike on Tuesday.  Or many things.  Without all the information, we are just guessing.

    The other thing to note is the time of the runs. It looks like when the update runs at 16:16:53 or 17:14:48 it takes longer than the average midnight run.  It might not hurt to look at older runs.  To me, this looks like just a data-growth issue with tbl_People anyway, the exception being when the task was not run at midnight, which my guess was resource contention or blocking.  BUT again, these are just guesses based on the numbers you posted.

    EDIT - disregard my first question... you answered that one.


  • One question that may help with this:

    Why so many updates? And how are they being done? SQL please, not just a description.

    And potentially, why aren't some of them being done as part of the load itself?

    Also, what are the record counts? The number of tables is not as important as the volume of data - 300 tables of 10k records isn't the same as 300 tables of 10 million records.

  • Since the tables are being truncated and then repopulated by inserts, they'll be pretty well packed at the page level, at nearly 100% page fullness, before they get hit with updates.  If the updates occur on any variable-width columns and they are "ExpAnsive" updates, there's going to be a ton of row forwarding on the original heaps (it's the HEAP version of page splits) along with a shedload of log-file usage... all of which takes a lot of time.
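
    You can measure the row forwarding directly.  A sketch against one of the tables from this thread (DETAILED mode is required for the forwarded-record count to be populated):

    SELECT  OBJECT_NAME(ips.object_id) AS table_name,
            ips.forwarded_record_count,
            ips.avg_page_space_used_in_percent
    FROM    sys.dm_db_index_physical_stats(
                DB_ID(), OBJECT_ID(N'dbo.tbl_People'), 0, NULL, 'DETAILED') AS ips
    WHERE   ips.index_id = 0;      -- index_id 0 = the heap itself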

    What caused the sudden increase in time?  Perhaps you've simply reached a scalability limit (tipping point) with memory vs. data, or maybe the OS swap file has come into play, or the amount of logging has caused unexpected growth of the log files (especially if they're set up for percentage growth instead of a fixed rate of growth), or even of the data files (especially if you don't have instant file initialization enabled) for the same reason.
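
    The growth settings, at least, are quick to rule in or out.  For the current database:

    -- Flags files that grow by a percentage rather than a fixed amount.
    SELECT  name,
            type_desc,                          -- ROWS or LOG
            size * 8 / 1024 AS size_mb,         -- size is in 8 KB pages
            CASE WHEN is_percent_growth = 1
                 THEN CONCAT(growth, ' %')
                 ELSE CONCAT(growth * 8 / 1024, ' MB')
            END AS growth_setting
    FROM    sys.database_files;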

    Or it could be any of the other things that have been mentioned.

    We just don't know: we keep guessing because we don't have anything to go on except some measurements demonstrating that things have slowed down.  We need more information, and we need some actual execution plans as a starting point.  It would be nice to have Before'n'After copies of the plans, but we at least need the plans for the worst parts of the updates as they currently exist.  Some log-file info, in the form of how many forwarded rows or page splits there are, would also be helpful.

    I also agree with Frederico... why so many bloody updates?  This seems like a "poor man's data warehouse" gone bad that might even be solved with some simple restores but, again, not enough info.  We don't even know how many rows and what table sizes we're talking about.  We're fishing in an empty swimming pool.


  • Thanks everyone for the valuable information.

    So as you all can see from this subset of the ETL, the times are getting longer. Some are parallel updates and some are sequential, but all of them have increased their times, with not much difference in data to account for the additional time. For example, TABLE17 went from 2,046,083 rows (13.6 mins) to 2,049,517 rows (25.0 mins) - a difference of only 3,434 records but an increase of 84% in time to complete. Also, I see that bulk inserts, index creation, and both sequential and parallel updates are all taking longer. I would like to blame the SAN, but my SAN Team keeps insisting that there is nothing wrong.

    PREVIOUS ETL TIMES (Approximately 1 Hour)

    2020-08-23 17:20:39 parallel:

    2020-08-23 17:20:39 update dbo.table7 4169596 (10.2 mins)

    2020-08-23 17:30:53 sequential:

    2020-08-23 17:30:53 update dbo.table8 4169596 (4.3 mins)

    2020-08-23 17:37:37 parallel:

    2020-08-23 17:37:37 update dbo.table11 14031227 (2.3 mins)

    2020-08-23 17:39:53 sequential:

    2020-08-23 17:39:53 update dbo.table12 14031227 (5.1 mins)

    2020-08-23 17:45:04 parallel:

    2020-08-23 17:45:04 update dbo.table16 2046083 (10.9 mins)

    2020-08-23 17:56:01 sequential:

    2020-08-23 17:56:01 update dbo.table17 2046083 (13.6 mins)

    2020-08-23 18:09:35 update table18[0] 4167722 (1.9 mins)

    2020-08-23 18:11:27 update table18[1] 4167405 (0.4 mins)

    2020-08-23 18:11:51 update table18[2] 269020 (0.0 mins)

    2020-08-23 18:11:52 update table18[3] 4167722 (0.2 mins)

    2020-08-23 18:12:06 update table18[4] 130010 (0.0 mins)


    NEW TIMES (about 2 hours)

    2020-09-13 18:54:58 parallel

    2020-09-13 18:54:58 update dbo.table7 4180864 (19.7 mins)

    2020-09-13 19:14:38 sequential:

    2020-09-13 19:14:38 update dbo.table8 4180864 (9.6 mins)

    2020-09-13 19:28:28 parallel:

    2020-09-13 19:28:28 update dbo.table11 14071223 (3.6 mins)

    2020-09-13 19:32:06 sequential:

    2020-09-13 19:32:06 update dbo.table12 14071223 (13.2 mins)

    2020-09-13 19:45:33 parallel:

    2020-09-13 19:45:33 update dbo.table16 2049517 (23.0 mins)

    2020-09-13 20:08:34 sequential:

    2020-09-13 20:08:34 update dbo.table17 2049517 (25.0 mins)

    2020-09-13 20:33:31 update dbo.table18[0] 4218696 (2.4 mins)

    2020-09-13 20:35:58 update dbo.table18[1] 4218378 (0.6 mins)

    2020-09-13 20:36:36 update dbo.table18[2] 55601 (0.0 mins)

    2020-09-13 20:36:37 update dbo.table18[3] 4218696 (0.4 mins)

    2020-09-13 20:37:00 update dbo.table18[4] 131418 (0.0 mins)


    Even bulk inserts take longer. All of these increases keep adding up, to the point where the whole run takes over 3 hours longer.

    2020-08-23 13:16:07 bulk insert dbo.tbl_St1 6184366 (0.3 mins)

    2020-08-23 13:16:25 bulk insert dbo.tbl_St2 4170284 (1.5 mins)

    2020-08-23 13:17:52 bulk insert dbo.tbl_St3 2597361 (0.1 mins)

    2020-08-23 13:18:00 bulk insert dbo.tbl_St4 3202485 (0.2 mins)

    2020-08-23 13:18:13 bulk insert dbo.tbl_St5 14031300 (6.0 mins)

    -------------------------------------------------------------------------------------------------------------------------------

    2020-09-13 13:08:14 bulk insert dbo.tbl_St1 6187532 (0.4 mins)

    2020-09-13 13:08:38 bulk insert dbo.tbl_St2 4181554 (1.8 mins)

    2020-09-13 13:10:27 bulk insert dbo.tbl_St3 2600752 (0.2 mins)

    2020-09-13 13:10:38 bulk insert dbo.tbl_St4 3211750 (0.3 mins)

    2020-09-13 13:10:56 bulk insert dbo.tbl_St5 14071296 (8.2 mins)

    Even index creation takes longer:

    2020-08-23 18:35:55 create index1 (2.5 mins)

    2020-08-23 18:38:25 create index2 (4.3 mins)

    2020-08-23 18:42:43 create index3 (1.8 mins)

    2020-08-23 18:44:33 create index4 (2.5 mins)

    2020-08-23 18:47:01 create index5 (2.2 mins)

    2020-08-23 18:49:15 create index6 (4.2 mins)

    2020-08-23 18:53:30 create index7 (3.5 mins)

    ---------------------------------------------

    2020-09-13 21:04:34 create index1 (3.8 mins)

    2020-09-13 21:08:22 create index2 (5.1 mins)

    2020-09-13 21:13:25 create index3 (3.3 mins)

    2020-09-13 21:16:41 create index4 (3.9 mins)

    2020-09-13 21:20:32 create index5 (3.5 mins)

    2020-09-13 21:24:00 create index6 (5.7 mins)

    2020-09-13 21:29:45 create index7 (5.1 mins)

    Vendor says their code has not changed. The SAN Team says everything is OK on their end. I am capturing Windows performance metrics to see if I can find anything that would lead me to a root cause, but it's going to take time. Thanks for all the ideas. I will continue, and if I find the root cause I will post it here.
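
    In case it helps anyone following along: SQL Server exposes its own perfmon counters through a DMV, so they can be logged from T-SQL alongside the Windows metrics. A minimal snapshot of the buffer-pool counters, where memory pressure tends to show up first:

    SELECT  RTRIM(counter_name) AS counter_name,
            cntr_value
    FROM    sys.dm_os_performance_counters
    WHERE   object_name LIKE '%Buffer Manager%'
      AND   counter_name IN ('Page life expectancy', 'Lazy writes/sec');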

  • "It would be nice to have Before'n'After copies of the plans but we at least need the plans for the worst parts of the updates as they currently exist."

    I might be able to get the execution plans from after the problem arose.

    "Some log file info in the form of how many forwarded rows or page splits there are would also be helpful."

    I'll try to get that info.

    "This seems like a "poor man's data warehouse" gone bad"

    It's a vendor application. We are trying to work with them, but they keep pointing to the SAN as the problem, and the SAN Team says everything is well on their end.
