We have a C# console program using SqlClient. The code sets up a stored procedure call, executes it, and then closes the connection. One of the parameters is a .NET DataTable (SqlDbType.Structured) whose target is a user-defined table type (what else could it be!).
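For reference, here is a minimal sketch of how the call is wired up. The server, database, procedure, type, and column names below are placeholders, not our actual schema:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class TvpCallSketch
{
    static void Main()
    {
        // The DataTable's columns must match the user-defined table type
        // column-for-column (names and types here are placeholders).
        var rows = new DataTable();
        rows.Columns.Add("Id", typeof(int));
        rows.Columns.Add("Payload", typeof(string));
        for (int i = 0; i < 100000; i++)
            rows.Rows.Add(i, "value " + i);

        using (var conn = new SqlConnection(
            "Server=PROD;Database=OurDb;Trusted_Connection=True;"))
        using (var cmd = new SqlCommand("dbo.usp_MergeRows", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // The structured parameter: TypeName points at the
            // user-defined table type declared in the database.
            var p = cmd.Parameters.AddWithValue("@Rows", rows);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.RowTableType";

            conn.Open();
            cmd.ExecuteNonQuery();
        } // connection closed and disposed here
    }
}
```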
The problem is this. On the DEV, QA, and UAT servers, we can deliver 100K rows in the DataTable, execute a MERGE inside the SP from the table type parameter into a permanent table, and disconnect in 3 seconds or less (acceptable performance…for now).
The same exercise on PROD can take anywhere from 5 to 25 minutes.
PROD is a single instance of SQL Server on a 2-node cluster. The server reports 32 GB of memory and 16 cores; DEV, QA, and UAT (separate servers, not clustered) have nowhere near the same resources.
When we execute the C# code while pointing to PROD, CPU spikes and memory consumption rises sharply. If I manually "kill" the C# program, breaking the connection, PROD quickly returns to its normal "under-utilized" state. The same happens if the program ends normally.
I am no DBA, but I suspect some kind of DB configuration problem. I suppose it could be network-related, but the CPU/memory spikes on the DB server suggest that the SP has already received the data. Any suggestions? Some facts that might be relevant:
1) Tempdb has 444.56 MB allocated and is using only about 100 MB.
2) CPU is usually at 1 to 5%
3) Very few users (at the moment)
4) Trusted connection (Windows authentication) using SqlClient
5) On the production server using SSMS, I can run a script that loads 100K rows into an instance of the user-defined table type, use the "filled" variable, and call the SP. Everything executes in less than one second.
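Given point 5 (the SP itself is sub-second when run locally on the box), one thing we could do is instrument the C# side to see where the minutes actually go. Below is a sketch of a helper that uses SqlClient's built-in connection statistics to separate wall-clock time from time spent waiting on the server; it would be called in place of the plain ExecuteNonQuery in the sketch above:

```csharp
using System;
using System.Data.SqlClient;
using System.Diagnostics;

static class TimingSketch
{
    // Executes the command and reports where the time was spent.
    // Assumes cmd.Connection is already open.
    public static void TimedExecute(SqlCommand cmd)
    {
        var conn = cmd.Connection;
        conn.StatisticsEnabled = true; // have the provider count bytes and time
        var sw = Stopwatch.StartNew();

        cmd.ExecuteNonQuery();

        sw.Stop();
        var stats = conn.RetrieveStatistics();
        Console.WriteLine("Wall clock:        {0} ms", sw.ElapsedMilliseconds);
        Console.WriteLine("ExecutionTime:     {0} ms", stats["ExecutionTime"]);
        Console.WriteLine("NetworkServerTime: {0} ms", stats["NetworkServerTime"]);
        Console.WriteLine("BytesSent:         {0}", stats["BytesSent"]);
    }
}
```

If NetworkServerTime dominates, the minutes are spent waiting on the server; if wall clock dwarfs both counters, the time is going somewhere on the client or the wire.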