• jrestuccio (4/9/2013)


    Thank you SO MUCH for the replies; I am trying to take it all in.

    Let me describe a little bit about why the data is deleted so regularly and why it's important not to slow down insert speeds.

    Imagine 1000 searches being performed at one time; as soon as each search is done, its data is collected and inserted into the TFX_v2_unfilteredrates table. We need this to happen quickly because we have a massive number of searches to complete, and since the data needs to be FRESH, it all has to be collected within 1 day.

    The reason for deleting the data is merely to keep the table at a workable size, since I believe our queries are poorly written...

    -Josh

    Assuming that your hardware is spec'ed appropriately for the workload, even 1,000 inserts of 5,000 rows each (5 million rows total) probably shouldn't take more than five or ten minutes, even with a couple or three indexes to update. This code inserts 5 million rows into a test table - it ran on my DEV server in about 30 seconds:

    create table dbo.insert_test (date_collected datetime, hotel_name varchar(20), hotel_id int, rate decimal(6,2));
    go

    set nocount on;

    declare @i int = 1;

    -- 1,000 passes of 5,000 rows each = 5 million rows total
    while @i <= 1000
    begin
        insert into dbo.insert_test
        select dateadd(day, (@i * -1), getdate()), 'xxxxxxxxxxxxxxxxxxxx', t.N, 1.32 * t.N
        from dbo.tally t
        where t.N <= 5000;

        set @i = @i + 1;
    end
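
    The script assumes you already have a dbo.tally (numbers) table - if you don't, here's a minimal sketch of one way to build a 1-million-row tally table (the name, row count, and use of the system catalog views are just choices for this test, not anything your schema requires):

    -- build a sequential-integer tally table if one doesn't already exist
    if object_id('dbo.tally') is null
    begin
        select top (1000000)
               N = row_number() over (order by (select null))
        into dbo.tally
        from sys.all_columns a
        cross join sys.all_columns b;

        -- row_number() comes back as a nullable bigint; tighten it up and key it
        alter table dbo.tally alter column N int not null;
        alter table dbo.tally add constraint PK_tally primary key clustered (N);
    end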

    If you're avoiding indexes to save a few minutes once a day when the table is loaded with new data, but the absence of indexes is costing you several minutes every time you run a query, you're shooting yourself in the foot. Before you suffer through the poor performance of your query much longer, I'd investigate the impact of a proper indexing scheme on both the insert/delete process and the query performance to make sure you're striking an appropriate balance.
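
    For what it's worth, here's a rough sketch of the kind of indexing and purge pattern I'd benchmark against a table shaped like the test table above (the index choices, column names, and batch size are assumptions to test against your real queries, not a prescription):

    -- cluster on the collection date so the daily purge and "freshest data"
    -- queries both touch a contiguous range of the table
    create clustered index CIX_insert_test_date
        on dbo.insert_test (date_collected);

    -- nonclustered index for a hypothetical lookup by hotel
    create nonclustered index IX_insert_test_hotel
        on dbo.insert_test (hotel_id)
        include (date_collected, rate);

    -- purge rows older than a day in batches so the delete doesn't hold one
    -- giant lock or bloat the transaction log; 50,000 is just a starting point
    declare @rows int = 1;

    while @rows > 0
    begin
        delete top (50000)
        from dbo.insert_test
        where date_collected < dateadd(day, -1, getdate());

        set @rows = @@rowcount;
    end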

    Jason Wolfkill