http://www.sqlservercentral.com/blogs/jamesserra/2011/08/15/using-ssis-lookup-transformation-editor-with-millions-of-rows/

Using SSIS lookup transformation editor with millions of rows

By James Serra, 2011/08/15

If you are using an SSIS lookup transformation and the reference dataset is large (millions of rows) and the lookup is set to “Full cache” (the default), you might run into a problem.  With this setting, all of the reference rows are loaded into memory, and with that many rows the machine can run out of memory and the data flow will fail, because the lookup transformation will not spool its memory overflow to disk.

There are two solutions to this.  The first is to use one of the other two cache modes, “Partial cache” or “No cache”.  These prevent most or all of the rows from being loaded into memory, but the trade-off is many more hits to the database.  And if your reference dataset is tens of millions of rows, these modes could make the task take hours or even days. Read more about the different cache modes: SSIS – Lookup Cache Modes – Full, Partial, None and Lookup cache modes.

A better solution is to replace the lookup transformation and the corresponding OLE DB Command transformations with a MERGE statement (requires SQL Server 2008) in an Execute SQL task.  Read about it at Using MERGE in Integration Services Packages.  Basically, you create a Data Flow task that loads, transforms, and saves the source data to a staging table, create an Execute SQL task that contains the MERGE statement, connect the Data Flow task to the Execute SQL task, and use the data in the staging table as the input for the MERGE statement.
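As a rough sketch of what the Execute SQL task's statement might look like — assuming a hypothetical staging table dbo.Customer_Staging that the Data Flow task has loaded, and a destination table dbo.Customer keyed on CustomerID (all names here are illustrative, not from the original post):

```sql
-- Hypothetical upsert: apply the staged rows to the destination table
-- in one set-based statement instead of row-by-row lookups.
MERGE INTO dbo.Customer AS target
USING dbo.Customer_Staging AS source
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    -- Row already exists in the destination: update it
    UPDATE SET target.Name  = source.Name,
               target.Email = source.Email
WHEN NOT MATCHED BY TARGET THEN
    -- Row is new: insert it
    INSERT (CustomerID, Name, Email)
    VALUES (source.CustomerID, source.Name, source.Email);
```

Because MERGE runs as a single set-based operation inside the database engine, it avoids both the per-row database hits of “No cache”/“Partial cache” and the memory pressure of “Full cache”.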

One SSIS package that I was working on used “No cache” with a reference dataset of 5 million rows, and it was taking 6 hours to run.  After replacing the lookup with the MERGE statement, it took only 15 minutes to complete.

One side note: “Full cache” mode adds startup time to your data flow, because all of the caching takes place before any rows are read from the data flow source(s).  So if you execute a package and it takes a long time before you see the first task start, it could be because many rows are being loaded into memory.


Copyright © 2002-2014 Simple Talk Publishing. All Rights Reserved.