I have an SSIS package that copies 5 large tables daily. There is no WHERE clause in these data pulls; every record is selected, and the source tables will only keep growing.
We are now at the point where the SSIS package causes the transaction log on the target SQL Server to run out of space.
I noted that the OLE DB Destination components in the package currently have the default Rows per Batch and Maximum Insert Commit Size values (Data Access Mode is Fast Load). My understanding is that this means SSIS attempts to load all rows in a single transaction, which now exceeds the maximum available log space.
I've read that if I change the Maximum Insert Commit Size to a smaller number, SQL Server will issue a commit after that many rows, which "...may help with transaction log and temp db size".
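As I understand it, this is analogous to the BATCHSIZE option of BULK INSERT, where every batch of N rows is committed as its own transaction instead of one huge transaction. A minimal sketch of that analogy (the table name, file path, and batch size below are made up for illustration, not taken from my package):

-- Hypothetical illustration: with BATCHSIZE set, each 100,000-row batch
-- is committed as a separate transaction rather than all rows at once.
BULK INSERT dbo.BigTable
FROM 'D:\Staging\BigTable.dat'
WITH (BATCHSIZE = 100000, TABLOCK);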
My questions: if I lower the Maximum Insert Commit Size, will the log still keep growing unless I take a log backup during the data load so that the already committed smaller batches can be truncated from the log?
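For context, this is roughly what I had in mind for a mid-load log backup, run on a schedule while the package executes (a sketch only; the database name and backup path are placeholders):

-- Sketch: back up the log part-way through the load so that batches already
-- committed under the smaller commit size can be truncated from the log.
BACKUP LOG [TargetDB]
TO DISK = N'D:\LogBackups\TargetDB_midload.trn'
WITH COMPRESSION, CHECKSUM;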
Also, do I need to match Rows per Batch to the Maximum Insert Commit Size?
Basically, I need to load this data but reduce transaction log usage so it doesn't grow past its imposed hard cap. Any advice appreciated.
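While testing, I plan to watch log usage during the load with something like the following (assuming SQL Server 2012 or later for sys.dm_db_log_space_usage; 'TargetDB' is a placeholder):

USE [TargetDB];
-- How full is the log right now?
SELECT used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Why can't the log be truncated (e.g. LOG_BACKUP, ACTIVE_TRANSACTION)?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'TargetDB';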
I can't put the target DB into Simple Recovery because it is participating in an HA group.