SSIS-C#script task CPU utilization

  • We have recently upgraded our production SQL Server from 2012 to 2016 Enterprise edition. We have 12 SSIS packages scheduled as SQL Agent jobs on this server. The packages were copied as-is from the 2012 server to the 2016 one. Earlier there were just 3; now there are 12, and the number is going to increase further.
    We are observing a CPU spike every time a package executes. Although each spike is only momentary, these packages run every 3 minutes, so the system shows high utilization almost all the time.

    I can see in Resource Monitor that every time an SSIS job executes on its schedule, a new DTexec.exe process is created. One such process takes around 6% CPU, and a separate process is created for every SQL Agent job. With all 12 packages executing at the same time, I see 12 DTexec.exe processes in Task Manager, together consuming 12 * 6 = 72% CPU.
    To check whether the issue was with our packages, I disabled all 12 jobs and executed another package from the Agent, containing only one simple script task that just assigns a value to a variable. I saw the same behavior there as well.
    I ran another test too: I scheduled a package that has no script task, just a single Execute SQL Task. DTexec.exe never showed up for this package, and there were no CPU spikes.
    It looks like this behavior is specific to the script task. Can you please help me understand it?

    Thank you for your time!!

  • ROOPESHVP - Tuesday, December 11, 2018 4:25 AM

    dtexec is the executable which runs SSIS packages, regardless of whether there is a script task inside them. I would imagine that extra CPU resources are required to load the environment necessary to execute any C# code in script tasks, and would therefore suggest that this is all standard behaviour.

    But having 12 packages running concurrently, every three minutes, does sound like a potential problem.


  • Thank you for your response.
    These packages constantly poll folders for report files; when one is found, they pick it up and send it to the recipients. The files are report requests submitted by users through an online interface, and we can't afford to make them wait long for their reports, hence the packages run every 3 minutes.
    The recipient data flows into the script task as a result set from a stored procedure executed in the first step, roughly as in the sketch below. We need 12 different packages because they do this for 12 different applications and query 12 different databases on the same server.
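
    For context, the core of each script task looks roughly like the sketch below. The variable names, column name, addresses and SMTP host are illustrative placeholders, not our real ones; the recordset that the Execute SQL Task stores in an Object variable is unpacked with an OleDbDataAdapter:

    using System.Data;
    using System.Data.OleDb;
    using System.Net.Mail;

    // Body of the script task's Main(). "User::Recipients" holds the ADO
    // recordset returned by the stored procedure; "User::ReportFile" holds
    // the path of the report file picked up from the polled folder. All
    // names here are illustrative assumptions.
    public void Main()
    {
        var recipients = new DataTable();
        new OleDbDataAdapter().Fill(recipients, Dts.Variables["User::Recipients"].Value);

        using (var smtp = new SmtpClient("smtp.example.local"))
        {
            foreach (DataRow row in recipients.Rows)
            {
                var mail = new MailMessage(
                    "reports@example.local",              // sender (placeholder)
                    row["Email"].ToString(),               // recipient from the SP result set
                    "Your report is ready",
                    "Please find your report attached.");
                mail.Attachments.Add(new Attachment(
                    Dts.Variables["User::ReportFile"].Value.ToString()));
                smtp.Send(mail);
            }
        }

        Dts.TaskResult = (int)ScriptResults.Success;
    }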

  • ROOPESHVP - Tuesday, December 11, 2018 5:29 AM

    This sounds like it would be better suited to a C# service.
    But an SSIS solution could be made to work, I am sure.
    Instead of running packages every three minutes, have you considered using a package containing a File Watcher? Such a package can run on a continuous schedule (every minute), but will only complete when a file is found; a rough sketch of the idea is at the end of this post.
    If your folder structure is hierarchical, like this:

    Folder 1
      Folder 1A
      Folder 1B

    The file watcher can be assigned to monitor Folder 1 and its subfolders.
    There is also no (technical) reason why a single package cannot access any or all of your 12 databases.
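
    For what it's worth, the same idea can be sketched in a plain script task using System.IO.FileSystemWatcher, if you would rather not rely on a separate File Watcher component. The folder path and timeout below are illustrative assumptions:

    using System;
    using System.IO;
    using System.Threading;

    // Minimal sketch: block until a file appears anywhere under the watched
    // folder, then return its full path (or null if the timeout elapses).
    public static class FileWatch
    {
        public static string WaitForFile(string folder, TimeSpan timeout)
        {
            using (var watcher = new FileSystemWatcher(folder))
            using (var signal = new ManualResetEventSlim(false))
            {
                string found = null;
                watcher.IncludeSubdirectories = true;   // covers Folder 1A, Folder 1B, ...
                watcher.Created += (s, e) => { found = e.FullPath; signal.Set(); };
                watcher.EnableRaisingEvents = true;

                return signal.Wait(timeout) ? found : null;
            }
        }
    }

    The package would call something like FileWatch.WaitForFile(@"D:\Reports\Folder 1", TimeSpan.FromMinutes(50)) and then process whatever path comes back.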


  • Thanks Phil.
    I tried a couple of other things, but with no luck.
    I scheduled just one SQL Agent job instead of many. This single job has multiple steps that call the same SSIS package with different configuration files. The result was the same: CPU spiked whenever the package executed.
    The second thing I tried was using an Execute Package task to call the other packages from a single Agent job. Same result again.

    We are now trying to schedule the packages from a different server, using DTexec.exe called from a command-line script scheduled in Windows Task Scheduler, along the lines of the command below. This way we will at least relieve the production DB server of the CPU spikes.
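
    Each scheduled task boils down to a one-line command per package; the install path and file names below are illustrative, and /F and /Conf assume the packages still use the package deployment model with XML configuration files:

    "C:\Program Files\Microsoft SQL Server\130\DTS\Binn\DTexec.exe" /F "D:\SSIS\ReportMailer.dtsx" /Conf "D:\SSIS\App1.dtsConfig"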

    I also agree that a C# service would be more appropriate in this case; however, it requires an engineering effort from the team and is not a solution we can have in place immediately. Thank you for your suggestion.
