I wonder why the number of reads increases during an insert test

  • I am writing a performance baseline test.

    The first test writes 5,000,000 rows into one table. I realise this is not representative of OLTP behaviour, but it helped me start interpreting performance counters and test several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1Gb vs 10Gb network, AMD vs Intel, etc. It also lets me compare several SQL Server setups (recovery model, max memory configuration, ...).

    The screenshot shows the results of 2 runs on the same server: Windows Server 2012 R2, SQL Server 2014, 16 GB RAM.

    In test 1, min/max server memory was set to 9215 MB / 10751 MB.

    In test 2, min/max server memory was set to 13311 MB / 14847 MB.

    The script ensures that the number of bytes inserted into the nvarchar columns is always the same.

    This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).

    Since roughly 13 GB has to be written, the results of test 1 show the lead time increasing once more than 10 GB has been inserted (columns 8 and 9).

    In addition, you can see that at that moment:

    - buffer cache hit ratio is decreasing

    - page life expectancy becomes "terrible"

    - free list stalls/sec increases

    - lazy writes/sec increases

    - read latency increases (write latency does not)

    In test 2 (id 3 in column 1 of the screenshot) those counters are hardly affected, since all 5,000,000 rows can be kept in memory.

    Now what I do not understand is:

    Why the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increases so dramatically during run 1.

    I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. You can indeed see here that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there is no real impact on write latency.

    Finally, I want to note:

    - I'm the only user on this machine

    - the table has a clustered index on an identity column

    - there are no foreign key constraints

    - inserts are executed in a loop, not as one big transaction

    - to monitor progress and behaviour/impact, the counters are stored every 10,000 loop iterations using DMV queries (a minimal sketch of such a loop follows below)
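
    A simplified sketch of what the loop looks like (table, column and procedure names are simplified placeholders, not the actual names from my script):

    -- Simplified sketch of the insert test; names are placeholders.
    CREATE TABLE dbo.TestTable
    (
        Id   int IDENTITY(1,1) NOT NULL,
        Col1 nvarchar(500)     NOT NULL,
        Col2 nvarchar(500)     NOT NULL,
        CONSTRAINT PK_TestTable PRIMARY KEY CLUSTERED (Id)  -- clustered index on the identity column
    );
    GO

    DECLARE @i int = 1;
    WHILE @i <= 5000000
    BEGIN
        -- fixed-length payload, so every row inserts the same number of bytes
        INSERT INTO dbo.TestTable (Col1, Col2)
        VALUES (REPLICATE(N'a', 500), REPLICATE(N'b', 500));

        -- every 10,000 iterations the counters are captured via DMV queries
        -- (placeholder; the real script calls its own logging procedure here)
        -- IF @i % 10000 = 0 EXEC dbo.usp_LogCounters;

        SET @i += 1;
    END;

    Each INSERT runs as its own auto-committed transaction, so the test never builds up one big transaction.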

    So I wonder why SQL Server starts to execute so many reads in test 1. Any ideas?

    Thanks in advance,

    Kind Regards

    Peter.

  • A screenshot of the script to complete my post.

    Kind regards

    Peter

    This is a noob question in response - where are you defining ReadsMdf? If you look at the adjacent columns to the left, you will notice a pattern - what is your formula for computing ReadsMdf, and what are you looking at?

    I have added the complete SQL script here ...

    http://www.hoveseliga.be/images/perftest.txt

    so you can copy/paste it into SSMS.

    It consists of several steps:

    - CREATING TEST DATABASE

    - CORRECTING DATABASE SETTINGS

    - CORRECTING MIN/MAX MEMORY

    - CREATING TESTTABLE

    - CREATING TABLES, VIEWS AND STORED PROCEDURES TO LOG PERFMON COUNTERS DURING EXECUTION

    - DROPCLEANBUFFERS

    - THE INSERT TEST

    - VIEW TO GET THE RESULTS (AS DISPLAYED IN MY FIRST POST)

    In the script you can see that I use sys.dm_os_performance_counters to retrieve and store page life expectancy, etc., and sys.dm_io_virtual_file_stats to retrieve and store the number of bytes read/written and the number of reads/writes per database file.
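
    Roughly, the queries against those two DMVs look like this (simplified; the actual script stores the results in logging tables instead of just selecting them):

    -- Page life expectancy from the buffer manager counters
    SELECT cntr_value AS page_life_expectancy
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name = 'Page life expectancy'
           AND object_name LIKE '%Buffer Manager%';

    -- I/O per database file of the current database
    -- (the values are cumulative since instance start)
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.name                  AS logical_file_name,
           vfs.num_of_reads,
           vfs.num_of_bytes_read,
           vfs.num_of_writes,
           vfs.num_of_bytes_written
    FROM   sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
    JOIN   sys.master_files AS mf
           ON  mf.database_id = vfs.database_id
           AND mf.file_id     = vfs.file_id;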

    PS1:

    If you would like to execute the script, please verify the code first and execute it step by step ... since it might have an impact on your system and you might need to change some settings (location of the database files, min/max server memory, etc.).

    PS2:

    I just executed the script on a SQL Server 2012 instance. SQL 2012 shows the same "strange - until explained -" read behaviour as SQL 2014 ... assuming my insert script and the stored procedures that capture the statistics don't contain logical flaws.

    PS3:

    I do not have a problem with the fact that I see a lot of reads. I would just like to know why SQL Server does it in this particular case.

    Kind regards

    PeteR.

  • I found the cause.

    Every 10,000 rows I execute SELECT COUNT(*) on the table into which I insert the 5,000,000 rows.

    This causes the enormous number of reads and explains why the lead times increase dramatically once the number of MB inserted grows beyond the max server memory setting.

    When I remove the SELECT COUNT(*) from the script, the complete script runs about 5 times faster and the increase in the number of reads flattens out.
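
    That makes sense: each SELECT COUNT(*) scans the whole clustered index, and once the table no longer fits in the buffer pool those scans have to read pages back from disk, which is exactly what shows up in the read counters and the lead times. For anyone who still wants a progress check, a metadata-based row count should be much cheaper than scanning the table; a small sketch (the table name is a placeholder):

    -- Cheaper progress check: row count from metadata instead of scanning the table
    SELECT SUM(ps.row_count) AS approx_row_count
    FROM   sys.dm_db_partition_stats AS ps
    WHERE  ps.object_id = OBJECT_ID(N'dbo.TestTable')  -- placeholder table name
          AND ps.index_id IN (0, 1);                   -- heap or clustered index only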

    Kind regards

    Peter.
