To Index, or Not to Index

• I have a huge table (80 million records) where I need to append an extra URL column. The query is roughly like this:

    Update a set URL = b.URL
    from a
    inner join b on a.ID = b.ID
    where a.URL is null

    I'm doing batch updates right now, but the progress is very slow. I'm debating whether I should index a.URL, which is a varchar(500) field, and the table already has massive indexes that cannot be dropped because they are used by production.

    What's your take on this, to index, or not to index on the URL column? Thanks!

• If there is no index on the URL column of the table you are pulling the data from, then you will probably end up with a row-by-row table scan. Very slow.

    You could create a temporary index and drop it when the updates are complete (a sketch of that is shown below). That is assuming this is a one-time update. If it will be run several times, it may not hurt to leave the index in place.

    Take a look at the execution plan and it will tell you what is taking the most time and give some recommendations as to how to speed things up.
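    A rough sketch of that temporary-index idea, assuming the names from the post (the index name here is invented) and that the update filters on a.URL:

    -- Hypothetical index to support the update's filter on a.URL;
    -- drop it once the one-time update has finished.
    CREATE NONCLUSTERED INDEX IX_a_URL_tmp ON dbo.a (URL);

    -- ... run the batched update ...

    DROP INDEX IX_a_URL_tmp ON dbo.a;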

  • Without knowing the exact DDL, I'm guessing that adding yet another index won't help the update speed. I think the update is slow because you are getting page splits while updating the long varchar column.

    As an aside, can you control the behaviour of varchar(max) so it is stored only in LOB pages?
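    For what it's worth, SQL Server does expose a table-level switch for that, though it only applies to the MAX types (varchar(max), nvarchar(max), varbinary(max)), not to a varchar(500) column. A sketch, assuming the table is dbo.a:

    -- Push large value types off the data pages into LOB storage,
    -- leaving only a 16-byte pointer in the row.
    EXEC sp_tableoption 'dbo.a', 'large value types out of row', 1;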

• Thanks for the input. It just occurred to me that a better option might be to dump all the data into a staging table that is a replica of table a but without any indexes. I actually used to refresh the data in table a on a quarterly basis using a staging table like this, and it didn't take that much time to dump 80-90 million rows and recreate the indexes. I wasn't doing it this time because I wasn't really thinking; I assumed appending data to an extra column would be as straightforward as it appeared. But I forgot that every update effectively deletes the old row, reinserts the new one, and has to maintain every one of those massive indexes on top of that. (A rough sketch of the staging-table approach is at the end of this post.)

    So I set up an SSIS job to do the dump and it's looking good: over 3 million rows transferred to the staging table in 20 minutes, versus 4 million records in the past 24 hours with the old approach of updating in place. Problem solved! 🙂

    While I'm here, I wonder if anybody can enlighten me on another puzzle. While I was doing the updates with the following loop, I noticed tempdb was growing wildly. I thought every loop iteration would wipe the slate clean and shouldn't affect the tempdb size, but it looks like I was wrong. Any explanation?

    ------------------

    While exists (select top 1 * from a where URL is null)
    Begin
        set rowcount 1000

        Update a set URL = b.URL
        from a
        inner join b on a.ID = b.ID
        where a.URL is null

        waitfor delay '00:00:02'
    End
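    For reference, a rough sketch of the staging-table pattern described above, assuming invented names (dbo.a_staging, placeholder columns) and that the production index definitions can be scripted out and recreated afterwards:

    -- 1. Load the joined data into an empty, index-free staging table.
    SELECT a.ID,
           a.Col1,          -- placeholder for the remaining columns of a
           b.URL
    INTO dbo.a_staging
    FROM dbo.a AS a
    LEFT JOIN dbo.b AS b ON a.ID = b.ID;

    -- 2. Recreate the indexes on the staging table (definitions are placeholders).
    CREATE CLUSTERED INDEX IX_a_staging_ID ON dbo.a_staging (ID);
    -- ... recreate the remaining production indexes here ...

    -- 3. Swap the tables once the staging copy is verified.
    EXEC sp_rename 'dbo.a', 'a_old';
    EXEC sp_rename 'dbo.a_staging', 'a';

    Loading into a heap first and indexing afterwards avoids paying the index-maintenance cost on every single row, which is why the bulk dump runs so much faster than the in-place update.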

  • "SET ROWCOUNT" is a global setting. Not to be used unless you want to limit every single query executed.

    Use TOP:

    UPDATE TOP (10000) ...
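    A minimal sketch of the batched update rewritten with TOP, using the table and column names from the earlier posts (the batch size and delay are arbitrary):

    WHILE EXISTS (SELECT 1 FROM dbo.a WHERE URL IS NULL)
    BEGIN
        -- TOP limits only this statement, unlike SET ROWCOUNT, which lingers for the session.
        UPDATE TOP (1000) a
        SET URL = b.URL
        FROM dbo.a AS a
        INNER JOIN dbo.b AS b ON a.ID = b.ID
        WHERE a.URL IS NULL;

        -- If some rows can never get a non-NULL URL from b, capture @@ROWCOUNT
        -- right after the UPDATE and break when it reaches 0, so the loop cannot spin forever.
        WAITFOR DELAY '00:00:02';
    END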

• If a large percentage of rows have a.URL IS NULL, might it be beneficial to add a filtered index, so that the update only touches the pages and rows that are known to still be missing a URL value?
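    A sketch of that idea as a SQL Server filtered index, assuming the names used earlier in the thread (the index name is invented):

    -- Filtered index covering only the rows that still need a URL,
    -- so each batch can seek straight to the unprocessed rows;
    -- the index also shrinks as rows are updated out of the filter.
    CREATE NONCLUSTERED INDEX IX_a_URL_null
    ON dbo.a (ID)
    WHERE URL IS NULL;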
