
Round Two of Insert Benchmark Tests


After seemingly exhausting my physical file layout and instance configuration options in Round One of testing, I decided to “throw some hardware” at the problem and to cheat a bit by using SQL Server data compression. I increased the physical RAM from 6GB to 8GB (which is as high as I can go without springing for 4GB DIMMs), and I reran the best performing configuration from Round One.

Insert performance increased from an elapsed time of 240 seconds and 4166 rows/second to 233 seconds and 4291 rows/second, which was not too impressive. I also tried changing the recovery interval setting from the default value of zero to a value of five (for five minutes), with no noticeable change in performance.
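
For anyone who wants to repeat that experiment, the recovery interval is an advanced sp_configure option. Something along these lines should work, though treat it as a sketch rather than a tuning recommendation, since your instance-level settings may differ:

-- Expose the advanced options so that 'recovery interval (min)' can be set
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the recovery interval from the default of 0 to 5 minutes
EXEC sp_configure 'recovery interval (min)', 5;
RECONFIGURE;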

Next, I implemented PAGE data compression on the clustered index of BigTable. In this case, insert performance improved to 164 seconds and 6097 rows/second, with WRITELOG still the top wait type. SQL Server data compression is an Enterprise Edition-only feature that works very well in the right situation; it is ideal for data warehouse use, for example.
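
If you want to try the same thing, rebuilding the table with PAGE compression is roughly what this amounts to. The dbo schema name below is an assumption on my part, and the savings estimate step is optional:

-- Optional: estimate how much space PAGE compression would save on the clustered index
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'BigTable',
    @index_id = 1,               -- 1 = the clustered index
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Rebuild the table so the clustered index is stored with PAGE compression
ALTER TABLE dbo.BigTable
REBUILD WITH (DATA_COMPRESSION = PAGE);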

I have a two port, PCI non-RAID SATA controller that I will try next (to segregate the log traffic from the motherboard SATA controller), and after that I will see if I can pick up a cheap, four port PCI-E RAID SATA controller. At that point I will probably be done with hardware and configuration changes, and will move on to better ways to insert the data than this baseline, deliberately bad method. Even so, we have gone from 835 rows/second to 6097 rows/second with some simple changes.
