Anjan Wahwar (4/15/2014)
I have a requirement to load a source .txt file and perform an incremental update on my destination table. Because I am not allowed to create a permanent table to hold the staging records (from the file), I'm loading the file data into a global temp table and then performing a MERGE operation (temp table & destination table).
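For reference, the MERGE is roughly like this (a simplified sketch - the table, key and column names below are placeholders, not the real ones):

-- ##FileStage is the global temp table populated by the data flow
MERGE dbo.DestTable AS tgt
USING ##FileStage AS src
    ON tgt.BusinessKey = src.BusinessKey
WHEN MATCHED THEN
    UPDATE SET tgt.SomeCol  = src.SomeCol,
               tgt.SomeDate = src.SomeDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (BusinessKey, SomeCol, SomeDate)
    VALUES (src.BusinessKey, src.SomeCol, src.SomeDate);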
In the data flow task I have a Derived Column transformation that handles a date column (if it is blank or null, pass NULL to the destination; otherwise pass the column through as is).
Everything is working fine; the only problem is performance. Transferring 1 million records through the Derived Column transformation takes my package almost an hour.
Any ideas on where I'm losing performance?
Are you able to work out the split between how long it takes to populate the temp table and how long the merge takes?
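If the MERGE runs from an Execute SQL task, one rough way to get that number (a sketch, not your actual code) is to time it in T-SQL and compare the result with the data flow duration in the package's execution log:

DECLARE @t0 datetime2 = SYSUTCDATETIME();

-- your existing MERGE between the temp table and the destination goes here

SELECT DATEDIFF(SECOND, @t0, SYSUTCDATETIME()) AS merge_seconds;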
The Derived Column transformation is unlikely to be the problem. In my experience, it is fast and does not cause blocking.
Does the temp table have any indexes or primary keys? Dropping these before the import and then recreating them before the merge might speed things up.
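For example (the index and column names here are just illustrative):

-- Before the data flow loads the temp table: drop the index to avoid index maintenance during the load
DROP INDEX IX_Stage_BusinessKey ON ##FileStage;

-- After the load, just before the MERGE: recreate it so the MERGE join can use it
CREATE NONCLUSTERED INDEX IX_Stage_BusinessKey ON ##FileStage (BusinessKey);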
If the source data does not contain duplicates, you could consider using a lookup with full cache on the target table and sending the 'not matched' rows directly into the target table. Then the MERGE at the end has much less to do - just the updates.
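In that case, the statement at the end only needs to handle the updates - something like this sketch (made-up names again):

-- Inserts are handled in the data flow (Lookup 'no match' output -> destination),
-- so only the matched rows need updating
UPDATE tgt
SET    tgt.SomeCol  = src.SomeCol,
       tgt.SomeDate = src.SomeDate
FROM   dbo.DestTable AS tgt
JOIN   ##FileStage   AS src
    ON src.BusinessKey = tgt.BusinessKey;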
Do all of the rows selected contain updates or inserts, or can you potentially filter the source?