• Just a quick update - we were able to resolve the poor performance issue and I wanted to share the findings.

    As I mentioned, this was new hardware. It turns out that the read/write ratio on the new server's storage controller cache was set to the default of 100% read / 0% write, while our old server was set to 25% read / 75% write. When we updated the new server to match the old server's setting, we saw a significant improvement in our particular test involving DB writes. However, the new server was still not performing as well as the old one: using our test program, it was now only about 2x slower instead of 8x slower.
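
    For reference, here is a minimal sketch of the kind of timing test behind these numbers (this is not our actual test program, just an illustration of the approach). It assumes Python with pyodbc against a SQL Server-style database; the connection string, temp table, and row count are placeholders. The idea is simply to time a batch of small, individually committed inserts and report seconds per 1000.

        # Minimal insert-timing sketch (illustrative only, not our real tool).
        # Assumes pyodbc and a SQL Server-style DB; CONN_STR is a placeholder.
        import time
        import pyodbc

        CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=myserver;DATABASE=testdb;Trusted_Connection=yes")
        N_INSERTS = 10000

        conn = pyodbc.connect(CONN_STR, autocommit=False)
        cur = conn.cursor()
        cur.execute("CREATE TABLE #perf_test (id INT, payload VARCHAR(100))")

        start = time.perf_counter()
        for i in range(N_INSERTS):
            # One small insert per transaction, so the test is dominated by
            # write/commit (log flush) latency rather than query work.
            cur.execute("INSERT INTO #perf_test (id, payload) VALUES (?, ?)",
                        i, "x" * 100)
            conn.commit()
        elapsed = time.perf_counter() - start

        print(f"{elapsed / N_INSERTS * 1000:.2f} seconds per 1000 inserts")
        conn.close()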

    It also turns out that the power profile in the BIOS was set to Balanced. Changing this setting to High Performance gave us another bump in performance.
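
    The BIOS profile itself is not visible from the OS, but its effect usually shows up in the CPU frequency behavior the OS reports. A rough sanity check along these lines (assuming a Linux host with cpufreq exposed under /sys; on Windows the equivalent would be checking the active power plan and observed clock speeds) can confirm the cores are actually allowed to run at full speed:

        # Rough OS-side check that the power-profile change took effect.
        # Assumes Linux with cpufreq exposed under /sys (an assumption;
        # adjust for your OS). Prints the scaling governors in use and
        # the current per-core clock range.
        import glob

        def read_all(pattern):
            values = []
            for path in glob.glob(pattern):
                with open(path) as f:
                    values.append(f.read().strip())
            return values

        governors = set(read_all(
            "/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"))
        freqs_khz = [int(v) for v in read_all(
            "/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")]

        print("scaling governors:", governors or "cpufreq not exposed")
        if freqs_khz:
            print("current clocks: %.0f-%.0f MHz"
                  % (min(freqs_khz) / 1000, max(freqs_khz) / 1000))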

    Estimated time in seconds per 1000 inserts:

    Old Server: 0.53
    New Server (original/default configuration): 4.33
    New Server (after changing read/write storage cache ratio): 0.96
    New Server (after changing power profile): 0.44
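
    Put in relative terms (simple arithmetic on the figures above), that is roughly 8x slower than the old server in the default configuration, about 2x slower after the cache ratio change, and slightly faster than the old server after the power profile change:

        # Slowdown relative to the old server, computed from the measured
        # seconds per 1000 inserts posted above.
        old = 0.53
        results = {
            "default configuration": 4.33,
            "after cache ratio change": 0.96,
            "after power profile change": 0.44,
        }
        for label, secs in results.items():
            print(f"{label}: {secs / old:.1f}x the old server's time")
        # -> default configuration: 8.2x, after cache ratio change: 1.8x,
        #    after power profile change: 0.8x (now faster than the old server)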

    A number of other things were tried after the read/write ratio was changed, and none seemed to have a significant impact on our particular test. These included changing the number of spindles per RAID array, changing the read/write ratio from 75% write to 100% write, testing against a solid-state drive (to rule out disk I/O issues), various configurations of the DB and transaction logs on the same and on different drives, and changing the hyper-threading setting.