Experts, I am wondering if I can get some advice. I will try to explain the situation as best as I can.

We are given tasks on a daily basis by end users who send us files. Those files contain ID, firstName, lastName, title, address, company, company address, phone number, etc. A source file can contain anywhere between 20,000 and 100,000 records. Once we have a file, we load those values into a DB table. After that, we run a match process against our main table and create 4 staging tables:

1. Tight match (individual name + address)
2. Loose match (individual name + address only)
3. Tight match (company + address)
4. Loose match (company + address)

After that, we build one final table which contains all the data plus some additional columns stating whether each record was a tight match or a loose match and how many records from the source exist in our main table. A rough sketch of the match steps is below. Then we send the file back to the end users.
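To give a rough idea, the match steps boil down to something like this (simplified sketch; table and column names like SourceRecords, MainTable, and stg_IndividualTightMatch are made up for illustration, not our real schema):

-- Tight match: exact equality on full name + address.
INSERT INTO stg_IndividualTightMatch (SourceID, MainID)
SELECT s.ID, m.ID
FROM   SourceRecords AS s
JOIN   MainTable     AS m
       ON  m.FirstName = s.FirstName
       AND m.LastName  = s.LastName
       AND m.Address   = s.Address;

-- Loose match: fewer columns (e.g. last name + address only),
-- skipping rows that already matched tightly.
INSERT INTO stg_IndividualLooseMatch (SourceID, MainID)
SELECT s.ID, m.ID
FROM   SourceRecords AS s
JOIN   MainTable     AS m
       ON  m.LastName = s.LastName
       AND m.Address  = s.Address
WHERE  NOT EXISTS (SELECT 1
                   FROM   stg_IndividualTightMatch AS t
                   WHERE  t.SourceID = s.ID);

The company tight/loose tables are built the same way, just on the company name and company address columns.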
What I am trying to do here is make this process faster. For example, 1 SSIS package with 1 file of 20,000 records takes about 20-30 minutes, and if I am processing 100,000 records, it takes 2-3 hours. It takes that long because we use the Melissa Data component for contact verification, which is slow. So I just need some suggestions on how I can improve this process. Could I create multiple temp tables, and if so, how would that work in this situation? I also thought about processing the records in batches (roughly the idea sketched below), but I don't think that will work either, because all the batches would be writing to the same table, which I don't believe is possible. Any suggestion would be helpful.
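For reference, the batching idea I was considering looks roughly like this (again, object names like Verified_Batch1 and VerifiedAll are made up for illustration):

-- Step 1: split the source rows into 4 roughly equal batches.
SELECT ID, NTILE(4) OVER (ORDER BY ID) AS BatchNo
INTO   #Batched
FROM   SourceRecords;

-- Step 2: each of 4 parallel SSIS data flows reads one batch
-- (WHERE BatchNo = 1, 2, 3, 4), runs the contact verification on it,
-- and writes to its own output table (Verified_Batch1 .. Verified_Batch4).

-- Step 3: once all 4 data flows finish, combine the per-batch outputs.
INSERT INTO VerifiedAll
SELECT * FROM Verified_Batch1
UNION ALL
SELECT * FROM Verified_Batch2
UNION ALL
SELECT * FROM Verified_Batch3
UNION ALL
SELECT * FROM Verified_Batch4;

I just don't know whether writing to separate per-batch tables and combining them at the end is the right way to get around the single-table write problem, or if there is a better approach.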