Inconsistent performance on SP call from C#

  • We have a console C# program using SqlClient. The code sets up a stored procedure (SP) call, executes it, and then closes the connection. One of the parameters is a .NET DataTable (SqlDbType.Structured) whose target is a user-defined table type (what else could it be!). A minimal sketch of the call pattern is at the end of this post.

    The problem is this: on the DEV, QA, and UAT servers, we can deliver 100K rows in the DataTable, execute a MERGE within the SP from the table type parameter into a permanent table, and disconnect in 3 seconds or less (acceptable performance…for now).

    The same exercise on PROD can take anywhere from 5 to 25 minutes.

    PROD is a single instance of SQL Server on a two-node cluster. The DB server reports 32 GB of memory and 16 cores; DEV, QA, and UAT (separate servers, not clustered) have nowhere near the same resources.

    When we execute the C# code pointing to PROD, CPU usage spikes and memory consumption rises sharply. If I manually “kill” the C# program, breaking the connection, PROD quickly returns to its normal “under-utilized” state. The same happens if the program ends normally.

    I am no DBA, but I suspect some kind of DB configuration problem. I suppose it could be network related, but the CPU/memory spikes on the DB server suggest that the SP already has the data. Any suggestions?

    Other facts:

    1) Tempdb has 444.56 MB allocated and is using only about 100 MB.

    2) CPU is usually at 1 to 5%.

    3) Very few users (at the moment).

    4) Trusted connection using SqlClient.

    5) On the production server, using SSMS, I can run a script that loads 100K rows into an instance of the user-defined table type, use the “filled” variable, and call the SP. It all executes in less than one second.

    thanks,

    pf
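
    For reference, a minimal sketch of the call pattern. The connection string, procedure name (dbo.usp_MergeRows), table type name (dbo.RowTableType), and column layout below are placeholders for illustration only; the real names differ:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class TvpCallSketch
    {
        static void Main()
        {
            // Build the DataTable that maps onto the user-defined table type.
            // Column names and types are placeholders; ours match the UDTT exactly.
            var rows = new DataTable();
            rows.Columns.Add("Id", typeof(int));
            rows.Columns.Add("Payload", typeof(string));
            for (int i = 0; i < 100000; i++)
                rows.Rows.Add(i, "value " + i);

            // Trusted connection, as noted in fact 4.
            using (var conn = new SqlConnection("Server=PROD;Database=OurDb;Integrated Security=SSPI;"))
            using (var cmd = new SqlCommand("dbo.usp_MergeRows", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 0; // no client-side timeout while we investigate

                // Structured parameter whose target is the user-defined table type.
                SqlParameter p = cmd.Parameters.AddWithValue("@Rows", rows);
                p.SqlDbType = SqlDbType.Structured;
                p.TypeName = "dbo.RowTableType"; // placeholder UDTT name

                conn.Open();
                cmd.ExecuteNonQuery(); // the SP performs the MERGE into the permanent table
            } // connection closed and disposed here
        }
    }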

  • Please read the second article I reference below in my signature block regarding asking for help on performance issues. You haven't provided all the information we need to give you good answers.

  • Hi Lynn...

    Thanks for responding. The problem is not with the query; its performance is acceptable on all servers but one. On the problem server, the lag seems to be in the time between calling the stored procedure and when the DML inside the SP actually executes. A client-side timing sketch is at the end of this post.

    pf
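
    For what it's worth, here is a rough sketch of one way to time the call from the client side, using SqlConnection's built-in statistics to separate provider/network time from total elapsed time. Procedure and type names are the same placeholders as before:

    using System;
    using System.Collections;
    using System.Data;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class TimingSketch
    {
        // Executes the SP call and reports where the time went.
        // 'rows' is the same DataTable described earlier.
        static void TimeCall(string connectionString, DataTable rows)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.usp_MergeRows", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                var p = cmd.Parameters.AddWithValue("@Rows", rows);
                p.SqlDbType = SqlDbType.Structured;
                p.TypeName = "dbo.RowTableType";

                conn.StatisticsEnabled = true; // ask the provider to collect statistics
                conn.Open();

                var sw = Stopwatch.StartNew();
                cmd.ExecuteNonQuery();
                sw.Stop();

                IDictionary stats = conn.RetrieveStatistics();
                Console.WriteLine("Total elapsed (client):   {0} ms", sw.ElapsedMilliseconds);
                Console.WriteLine("ExecutionTime (provider): {0} ms", stats["ExecutionTime"]);
                Console.WriteLine("NetworkServerTime:        {0} ms", stats["NetworkServerTime"]);
                Console.WriteLine("BytesSent:                {0}", stats["BytesSent"]);
            }
        }
    }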

  • And your servers are not identical. We need to see the actual execution plan of the query on the production server plus the table structures and indexes.

  • In fact, you may want to compare the actual execution plans between the various servers and see what is different.
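
    One way to capture the actual plan from the same connection the application uses (rather than from SSMS, where session settings can differ) is to wrap the call in SET STATISTICS XML. A rough sketch, using the same placeholder names as above:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class PlanCaptureSketch
    {
        // Runs the SP as a parameterized batch with SET STATISTICS XML ON,
        // then prints the actual-plan XML result sets returned by the server.
        static void PrintActualPlan(string connectionString, DataTable rows)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SET STATISTICS XML ON; EXEC dbo.usp_MergeRows @Rows; SET STATISTICS XML OFF;", conn))
            {
                var p = cmd.Parameters.AddWithValue("@Rows", rows);
                p.SqlDbType = SqlDbType.Structured;
                p.TypeName = "dbo.RowTableType";

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    do
                    {
                        // Showplan result sets have a single column whose name contains "Showplan".
                        while (reader.Read())
                        {
                            if (reader.FieldCount == 1 && reader.GetName(0).Contains("Showplan"))
                                Console.WriteLine(reader.GetValue(0)); // save and compare this XML per server
                        }
                    } while (reader.NextResult());
                }
            }
        }
    }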

  • Your data volume might not be the same. A query might perform fine on a table with a few thousand rows but not when the volume increases to millions.

    That's another reason to give as much information as possible when asking for performance help.

    Luis C.
  • There are many variables at play when it comes to performance issues. You mentioned the configuration of your production system: 32 GB RAM, 16 cores (how many physical processors? I assume two). What is the rest of the configuration? SAN or DASD for storage? Is it SCSI or Fibre Channel?

    What are the configurations for your DEV, TEST, UAT servers?

    Again, there are many things that can contribute to differences in performance between the different environments.
