Anatomy of an Incremental Load

  • Comments posted to this topic are about the item Anatomy of an Incremental Load

    Andy Leonard, Chief Data Engineer, Enterprise Data & Analytics

  • Nice in-depth article.

    Just curious - do you have any metrics as to how efficient this is? DTS got a black eye in the past for being less performant than more streamlined tools like bcp or bulk insert, and SSIS seems to be getting hit with the same criticism just by being its replacement. The process you're describing seems very efficient; I just don't have anything big enough to load these days to get an idea of how well it behaves.

    Never mind for now the ease of setting this up - I can see how much faster it would be to set one of these up than the "manual way"...

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Thanks Matt!

    I don't use DTS (or competing products) enough these days to speak about how SSIS performs when compared to them. I can tell you:

    1. In general SSIS is way faster than DTS. In tests I did with the Beta versions of SSIS I saw a typical 20% - 40% performance boost. When coupled with an upgrade from SQL Server 2000 to 2005, I saw those numbers increase even more. SSIS gives you more and different options for piping data through the enterprise, so it is possible to achieve much better performance.

    2. Staging updates has a measurable impact on incremental load performance compared to using the OLE DB Command; a rough sketch of the pattern is below.
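
    A minimal sketch of the set-based staged update, with hypothetical table and column names (nothing here is taken from the article itself):

        -- The Data Flow writes changed rows to a staging table; after the
        -- Data Flow, one set-based UPDATE replaces row-by-row OLE DB
        -- Command executions (dbo.Contact and stage.ContactUpdates are
        -- illustrative names):
        UPDATE d
        SET    d.FirstName    = s.FirstName,
               d.LastName     = s.LastName,
               d.EmailAddress = s.EmailAddress,
               d.ModifiedDate = s.ModifiedDate
        FROM   dbo.Contact AS d
        INNER JOIN stage.ContactUpdates AS s
               ON s.ContactID = d.ContactID;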

    :{> Andy

    Andy Leonard, Chief Data Engineer, Enterprise Data & Analytics

    Cool! I will have to see if I can run the next "big one" I get this way to get a feel for it, but it sounds like performance has definitely improved.

    Thanks!

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Hi Andy,

    One case is missing from your article: deleted rows (rows in the database but no longer in the input data).

    And just another question: do you really think that comparing all the data between two tables is faster than truncating the table and reloading it completely (if you have a time window for doing this, of course)?

    Until now I have had such a time window, but after reading your article I am considering loading data your way, because it is the best way to keep all data online 24x7.

    Thanks a lot.

    Rainer Kroos

    Just a wee thought on this article: surely what you are talking us through here is the same as is accomplished by the Slowly Changing Dimension 'box', which automagically creates all the T-SQL to do the updates, additions, and deletions, with support for keeping a history of changes.

    It's quite a mad one, but a bit of reading up on slowly changing dimensions really helps! 🙂

  • Good introduction to SSIS! One thing you wrote surprised me: that using a select statement would be faster than using a view. I don't understand why this is the case.

    Since you promised that this would be the first of a collection of articles on SSIS, I would like you to address a situation in which either the input or the output is not SQL Server. For my own purposes, I would like a demonstration of how to handle output to another database such as DB2 or MySQL.

    Thanks for a nice article.

    Arthur

    Arthur Fuller
    cell: 647-710-1314

    Only two businesses refer to their clients as users: drug-dealing and software development.
    -- Arthur Fuller

  • Great comments!

    Rainer: Deleted rows can be correlated with a Right Join instead of a Left Join, and filtered the same way as New Rows. For the Delete action I stage the rows (much like the Updates) and run a set-based query after the Data Flow - something like the sketch below.
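
    A minimal T-SQL sketch of that staged-delete pattern, with hypothetical names (the detection is written here as a left join from destination to source, which is the same correlation as the Right Join above seen from the other side):

        -- Stage the keys of destination rows that no longer exist in the
        -- source (src.Contact and stage.ContactDeletes are illustrative):
        INSERT INTO stage.ContactDeletes (ContactID)
        SELECT d.ContactID
        FROM   dbo.Contact AS d
        LEFT JOIN src.Contact AS s
               ON s.ContactID = d.ContactID
        WHERE  s.ContactID IS NULL;

        -- Then run the Delete action set-based, after the Data Flow:
        DELETE d
        FROM   dbo.Contact AS d
        INNER JOIN stage.ContactDeletes AS x
               ON x.ContactID = d.ContactID;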

    I do believe loading incrementally is faster than truncating and reloading in any database that is scaling, or is of sufficient size and experiencing a certain threshold of changes (this is my way of quoting Andy Warren: "It Depends" ;)).

    Surely what you are talking us through here is the same as is accomplished by the Slowly Changing Dimension 'box'.

    Yussuf: Yep - it's a lot of the same stuff accomplished with the SCD transformation (optimized some). Unless you're familiar with the concepts of ETL for Kimball-based databases, the SCD Wizard can be a bit intimidating. You essentially work through the same thought process you would work through here, but it looks different to the first-time developer. I would say the converse is more accurate: "the SCD is a very cool wrapper for incremental loads."

    ...using a select statement would be faster than using a view.

    Arthur: Gosh, I hope I didn't say that. If I did, forgive me. I argue against selecting a Table or View by name from the dropdown in the same way I argue against writing T-SQL that begins with "Select *": extra lookups take place for column names and the like when you execute "Select *". The same applies in SSIS, which is, after all, executing SQL against the database.
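
    For example, supplying an explicit query in the source instead of picking the table name from the dropdown - the column list below matches the columns used in the change-detection expression later in this thread, with ContactID assumed as the key:

        -- Name only the columns the load actually needs, rather than the
        -- SELECT * the table/view dropdown is equivalent to:
        SELECT ContactID, NameStyle, Title, FirstName, MiddleName,
               LastName, Suffix, EmailAddress, EmailPromotion,
               Phone, ModifiedDate
        FROM   Person.Contact;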

    :{> Andy

    Andy Leonard, Chief Data Engineer, Enterprise Data & Analytics

  • yussuf.khan (2/11/2008)

    Just a wee thought on this article: surely what you are talking us through here is the same as is accomplished by the Slowly Changing Dimension 'box', which automagically creates all the T-SQL to do the updates, additions, and deletions, with support for keeping a history of changes.

    It's quite a mad one, but a bit of reading up on slowly changing dimensions really helps! 🙂

    In my experience the SCD is not very efficient once you get above a thousand rows or so. As the name implies, the SCD is targeted at tables that do not change often, and dimension tables are typically on the small side. It works great and I've used it, but only in the right situations. Andy's approach probably scales a lot better.

    Nice article Andy!

    This may relate a little to Arthur's comment about loading to or from another RDBMS, or even another server. If you are loading from a system where you cannot join between the source and destination tables, would you use a Lookup and then the Conditional Split, or stage all the data in SQL Server and then load it into the destination using the method you show here?

  • Hi Andy,

    I appreciate your quick responses. Perhaps I misread the following sentence, but I don't think so:

    "Your source may be a table or view, it may be a query. For those of us who like to squeeze every cycle possible out of an application, using a query for relational database sources will provide better performance than specifying a table or view name."

    Arthur

    Arthur Fuller
    cell: 647-710-1314

    Only two businesses refer to their clients as users: drug-dealing and software development.
    -- Arthur Fuller

  • Arthur -

    Picking the name of the table or view from the dropdown is equivalent to select * from MyTableOrView. Just as that's bad form in T-SQL, it's not a great idea if you intend to get every ounce of performance out of SSIS.

    Andy -

    Like I mentioned last time (when I apparently got a sneak preview of the article :) ), very nice, in-depth article. Just curious about one thing: during the correlate part, when setting up the Lookup, I would have naturally gone toward NOT setting the Lookup task to ignore the error, and instead treating the "lookup failures" as the new records (meaning, setting it up to "redirect rows").

    Are there any major pros and cons to setting it up one way versus the other (I know there are a million ways to set up various things in here)? For example, in a case like that, is it better to have one conditional split that handles ALL conditions, or two conditional splits, each on a smaller set of the data?

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Hello,

    As already mentioned by others, I also doubt that this method works well for importing data from a different server.

    The problems with data coming from a different server start with bad access paths.

    I am still looking for the "ultimate" technology that will allow me to efficiently join data from two different servers without doing mostly full table scans in the remote database. Currently I am only achieving this through dynamic SQL. Please give me a hint if you know of a better method :-)
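
    For what it's worth, here is a minimal sketch of the kind of dynamic SQL I mean - RemoteServer, dbo.Customer, and the column names are all hypothetical. Building the OPENQUERY string dynamically pushes the filter to the remote server, so only the matching rows cross the wire:

        DECLARE @LastLoad datetime;
        DECLARE @sql nvarchar(max);

        -- The previous extraction point; in practice this would be read
        -- from a control table:
        SET @LastLoad = '20080201';

        -- OPENQUERY does not accept variables, hence the dynamic SQL;
        -- the inner query executes entirely on the remote server:
        SET @sql = N'SELECT * FROM OPENQUERY(RemoteServer,
            ''SELECT CustomerID, Name, ModifiedDate
              FROM dbo.Customer
              WHERE ModifiedDate > ''''' +
            CONVERT(nvarchar(23), @LastLoad, 126) + N''''''')';

        EXEC sp_executesql @sql;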

    Thanks for sharing your knowledge!

    Best Regards,

    Chris Büttner

  • Great comments. I wrote during lunch instead of responding - apologies. I will respond to the questions this evening or tomorrow.

    One thing I noticed in my browser (and maybe it is only my browser) is that I cannot see the Changed Rows detection condition expression. Here it is:

    ( (NameStyle != Dest_NameStyle)
      || (Title != Dest_Title)
      || (FirstName != Dest_FirstName)
      || (MiddleName != Dest_MiddleName)
      || (LastName != Dest_LastName)
      || (Suffix != Dest_Suffix)
      || (EmailAddress != Dest_EmailAddress)
      || (EmailPromotion != Dest_EmailPromotion)
      || (Phone != Dest_Phone)
      || (ModifiedDate != Dest_ModifiedDate) )
    || ( IsNull(Title) || IsNull(MiddleName) || IsNull(Suffix) )

    Back to work...

    :{> Andy

    Andy Leonard, Chief Data Engineer, Enterprise Data & Analytics

  • Great article Andy!

    I found a unique way to reduce the number of rows I have to check when I've got a SQL Server database as my source: I just add a "timestamp" column to each source table. Of course, this isn't really a timestamp - it's a varbinary that holds the database row version. Then I check for the minimum active row version and pull only the rows that have been updated or inserted since my last incremental load.

    This reduces the number of rows we have to compare down to 40K or so to see whether they were inserted or updated - a great time saver when the alternative is comparing the hundreds of millions of rows in our source database.
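
    Something like this minimal sketch (the table, columns, and watermark handling are made up for illustration):

        -- One-time setup: a rowversion column is maintained automatically
        -- on every INSERT and UPDATE (dbo.SourceTable is hypothetical):
        ALTER TABLE dbo.SourceTable ADD RowVer rowversion;

        -- At the start of each load, capture the current watermark.
        -- MIN_ACTIVE_ROWVERSION() returns binary(10); keep the eight
        -- meaningful bytes for comparison against the rowversion column:
        DECLARE @LastVersion binary(8);
        DECLARE @CurrentVersion binary(8);
        SET @CurrentVersion = CONVERT(binary(8), MIN_ACTIVE_ROWVERSION());

        -- @LastVersion would be read from a control table in practice:
        SET @LastVersion = 0x0000000000000000;

        -- Extract only the rows inserted or updated since the last load:
        SELECT CustomerID, FirstName, LastName, RowVer
        FROM   dbo.SourceTable
        WHERE  RowVer >= @LastVersion
          AND  RowVer <  @CurrentVersion;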

    Thanks again - Linda
