Parallel Processing of Large Volume ETL Jobs

  • Comments posted to this topic are about the item Parallel Processing of Large Volume ETL Jobs

  • The title of this article is "Parallel Processing of Large Volume ETL Jobs"... what do you consider to be a "Large Volume ETL Job"?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Large volume means the number of rows and the data size of the source(s) that need to be processed into the target system by applying a significant amount of complex validation, filtration, and transformation logic. What counts as a "large" number of rows or a "significant amount" of logic depends on the hardware capacity and the available ETL time window of each individual system. Before adopting the idea discussed in the article, the project should be studied for facts such as current and future hardware capacity, how much data parallelism the current hardware can support, the current and future ETL processing window, future complexity in the validation logic, and so on.

  • I think the information you have attempted to communicate is valuable. However, the repeated use of slashes and the fragmented or run-on sentences made the article almost impossible to read.

  • Ohh, and you teased me. You mentioned SQL Set Theory twice in your introduction and then never addressed it in the body. Now I have to go google SQL Set Theory.

  • This is great timing; we were just discussing this with regard to using Hadoop or SQL Server to do some very large (close to a terabyte) imports and transformations.

    I have a couple of questions, and I realize you didn't want to include code, but a little start on it would help if possible:

    How do you physically split up the large files and track them in each thread?

    How is the master process receiving its messages from the child processes?

    In a SQL Server implementation, would you most likely use CLR code to break up the files and bulk load to import them?

    Any suggestions on foreign keys and indexes?

    This is a great topic; it may give us a lead on how to proceed with our new data project.

    Very much appreciated!

    Skål - jh

  • RAGHAVENDRA NARAYANA (11/8/2007)


    Large volume means the number of rows and the data size of the source(s) that need to be processed into the target system by applying a significant amount of complex validation, filtration, and transformation logic. What counts as a "large" number of rows or a "significant amount" of logic depends on the hardware capacity and the available ETL time window of each individual system. Before adopting the idea discussed in the article, the project should be studied for facts such as current and future hardware capacity, how much data parallelism the current hardware can support, the current and future ETL processing window, future complexity in the validation logic, and so on.

    C'mon... How many rows and columns per row are you talking about?

    --Jeff Moden



  • We can think of an ETL system where we expect around 5 million transactions coming in, with around 25 columns each.

    I would ask any technical architects who have worked on database sizing and hardware configuration to share their thoughts on the article, including parallelism and processors, as well as on this question, please.
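
    As a very rough, purely illustrative sizing check (the per-column byte figures are assumptions, not numbers from this thread): 5 million rows with 25 columns at an average of 10 to 20 bytes per column works out to roughly 1.25 to 2.5 GB of raw data per load, before indexes and logging.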

  • Thanks for the rowcount and line size... I've got an experiment or two to do... I'll be back...

    --Jeff Moden



  • How do you physically split up the large files and track them in each thread?

    How is the master process receiving its messages from the child processes?

    FOR THE ABOVE TWO QUESTIONS, I HAVE GIVEN A DETAILED EXPLANATION IN THE ARTICLE (A ROUGH, GENERIC SKETCH OF THE SAME PATTERN IS AT THE END OF THIS POST).

    In a SQL Server implementation, would you most likely use CLR code to break up the files and bulk load to import them?

    I AM NOT SURE ABOUT THIS. I WOULD REQUEST THAT EXPERTS IN THIS PARTICULAR AREA ADDRESS IT.

    Any suggestions on foreign keys and indexes?

    NOTHING MUCH THAT I CAN THINK OF NOW. A FEW THINGS: KEEP THE MAIN TABLE AND RELATED TABLE INSERTS/UPDATES IN ONE COMMIT/ROLLBACK BLOCK. TAKE CARE OF IDENTITY COLUMNS IN CASE OF ROLLBACK OR FAILURE; THE IDENTITY NUMBERS WILL BE LOST IF A ROLLBACK OR FAILURE HAPPENS. INSERT/UPDATE THE RECORDS IN BULK, IN CHUNKS OF A FEW THOUSAND, PREFERABLY IN THE TARGET TABLE'S PRIMARY INDEX ORDER. IN THE TARGET TABLES, BE STRICT ABOUT WHICH INDEXES YOU ADD; TRY COMPOSITE INDEXES INSTEAD OF MULTIPLE INDIVIDUAL INDEXES.
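
    Since a couple of posters asked for a starting point, here is a minimal, generic sketch of the split-the-source / fan-out / report-back pattern described above. It is not the article's code: the file name, column names, chunk count, batch size, and the toy validation step are all illustrative assumptions, and a real loader would bulk-insert each batch into SQL Server instead of just counting rows.

        # Generic sketch of the pattern: the master splits the source file, spawns one
        # child process per chunk, and collects status messages from the children.
        # File names, column names, chunk count, and batch size are assumptions.
        import csv
        import multiprocessing as mp
        from pathlib import Path

        BATCH_SIZE = 1000  # rows handled per "commit" in a real loader


        def split_file(source: Path, chunks: int) -> list:
            """Split a delimited file into roughly equal chunk files, keeping whole rows."""
            lines = source.read_text().splitlines()
            header, rows = lines[0], lines[1:]
            size = -(-len(rows) // chunks)  # ceiling division
            paths = []
            for i in range(chunks):
                part = rows[i * size:(i + 1) * size]
                if not part:
                    break
                path = source.with_suffix(f".part{i}.csv")
                path.write_text("\n".join([header] + part) + "\n")
                paths.append(path)
            return paths


        def child(chunk, results):
            """Validate/transform one chunk in batches and report back to the master."""
            processed = 0
            batch = []
            with chunk.open(newline="") as f:
                for row in csv.DictReader(f):
                    if row["amount"]:            # stand-in for real validation logic
                        batch.append(row)
                    if len(batch) >= BATCH_SIZE:
                        processed += len(batch)  # a real child would bulk-insert the batch here
                        batch.clear()
            processed += len(batch)
            results.put(("done", chunk.name, processed))


        if __name__ == "__main__":
            # build a small sample source file so the sketch runs stand-alone
            src = Path("transactions.csv")
            src.write_text("id,amount\n" + "\n".join(f"{i},{i * 10}" for i in range(10_000)) + "\n")

            results = mp.Queue()
            workers = []
            for part in split_file(src, chunks=4):
                p = mp.Process(target=child, args=(part, results))
                p.start()
                workers.append(p)

            # the master only coordinates: one completion message is expected per child
            for _ in workers:
                status, name, count = results.get()
                print(f"{name}: {status}, {count} rows processed")
            for p in workers:
                p.join()

    The same shape applies whatever actually runs the children (operating-system processes, packages, or scheduled jobs): each child owns exactly one chunk, and the master only coordinates and collects status.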

  • Hi Raghavendra,

    Great Article.

    How do you actually spawn a new thread? I have to implement something like this, and the article has given me some valuable information, for which I'd like to thank you. Before I begin, though, I would like to perform some testing, and I can't seem to find any code that relates to spawning new threads.

    Thanks once again, and I hope I am not causing you any unnecessary hassle.

    M

  • Yes, I would like to know this too!
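
    For anyone else stuck on the same question: the article doesn't include code, so here is a minimal, generic illustration (the worker body and names below are placeholders, not anything from the article) of spawning and joining worker threads in Python.

        # Minimal illustration of spawning worker threads and waiting for them to finish.
        # A real ETL child would process one chunk of the source and report its status.
        import threading
        import queue

        def worker(name, results):
            # stand-in for processing one chunk of the input
            results.put(f"{name} finished")

        results = queue.Queue()
        threads = [threading.Thread(target=worker, args=(f"chunk-{i}", results)) for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        while not results.empty():
            print(results.get())

    Note that threads share a single process, so for CPU-heavy transformation work you would typically spawn separate processes instead (for example with the multiprocessing module, as in the earlier sketch); threads are fine when the work mostly waits on the database or on file I/O.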
