May 12, 2012 at 11:59 pm
SSIS 2008.
I have a simple (in concept) package which takes data from 9 flat files and imports to 9 SQL Server tables in a database.
It follows an INSERT/UPDATE pattern: lookups direct unmatched rows as INSERTs, while matched rows are directed into a staging table for a subsequent set-based UPDATE of the base table via a stored proc. The lookups are not cached, given the large size of the base data.
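For anyone following along, the set-based UPDATE step described above typically looks something like this (table and column names here are hypothetical, since the actual schema isn't given):

```sql
-- Sketch of the stored proc's set-based UPDATE from staging to base.
-- dbo.BaseTable, dbo.Staging, BusinessKey, Col1, Col2 are assumed names.
UPDATE tgt
SET    tgt.Col1 = stg.Col1,
       tgt.Col2 = stg.Col2
FROM   dbo.BaseTable AS tgt
INNER JOIN dbo.Staging AS stg
        ON tgt.BusinessKey = stg.BusinessKey;
```

Doing the UPDATE as a single joined statement after the data flow completes avoids row-by-row OLE DB Command updates, which is presumably why the staging-table approach was chosen.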
The routine has been running fine for a couple of weeks.
But yesterday, three of the data flow tasks decided to hang in the middle of processing.
Repeated runs of the process showed that (at least according to BIDS) the data flows always hang after processing exactly the same number of rows.
The underlying tables are fairly large (190 million, 10 million and 53 million rows respectively).
I tried creating an empty copy of one of the tables and firing into that instead - this seems to work.
I ran a full database index rebuild in case that might have been an issue. No change (I let the routine run all night).
If anyone has any ideas about what might be going on or what I could try, I would be appreciative. I am really puzzled as to what might be the problem here.
Phil
May 16, 2012 at 2:10 am
Indeed a strange problem.
Uncached lookups are of course very slow, but I don't really see a reason why the package should hang.
Did you try a cache connection?
How is the server doing when the package is running?
p.s.: quite ironic your signature says you need to have a question mark 😀 😎
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
May 16, 2012 at 3:40 am
Koen Verbeeck (5/16/2012)
Indeed a strange problem. Uncached lookups are of course very slow, but I don't really see a reason why the package should hang.
Did you try a cache connection?
How is the server doing when the package is running?
p.s.: quite ironic your signature says you need to have a question mark 😀 😎
Oooh, the question mark thing came back to haunt me, damnit!! It was actually aimed at this sort of post:
"I'm trying to import from a 1964 Dogbase system and it isn't working." [end of post]
If you're anything like me, your brain will spontaneously issue a "SO WHAT?" response 🙂
Back to the main point of the post.
>> The target tables are so big that cached lookups were exploding RAM, so I had to change that and take the speed hit. The nature of the import data is such that a partial cache would be worthless.
>> The server seems to be out for a Sunday morning hangover jog. No CPU stress and medium disk activity.
In the end, I changed the architecture of the solution and so far it's working:
>> Import to truncated staging tables
>> Use MERGE to UPSERT to the target tables
>> Limit parallelism on the MERGEs
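A minimal sketch of that MERGE-based UPSERT, with parallelism limited via a query hint (table and column names are hypothetical; the thread doesn't show the actual schema):

```sql
-- Hypothetical names: dbo.TargetTable, dbo.StagingTable, BusinessKey, Col1, Col2.
MERGE dbo.TargetTable AS tgt
USING dbo.StagingTable AS src
    ON tgt.BusinessKey = src.BusinessKey
WHEN MATCHED THEN
    UPDATE SET tgt.Col1 = src.Col1,
               tgt.Col2 = src.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (BusinessKey, Col1, Col2)
    VALUES (src.BusinessKey, src.Col1, src.Col2)
OPTION (MAXDOP 1);  -- one way to limit parallelism on the MERGE
```

This moves the matching work out of SSIS lookups entirely and lets SQL Server do a single set-based join per table, which sidesteps whatever the data flows were hanging on.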
So no need to try to solve it - I'll just chalk it all down to experience.