The SQL Server Data Row Size Question: Why is It Bigger?


This morning, I received the following question from a user:

Hello Madam,

Could you please clarify SQL Server “Data Row” size:

If I run the script below on SQL Server 2012, then Slot (row) Size is 710 bytes.

If I run the same script against SQL Server 2016 and above, then Slot (row) Size is 724 bytes.

They then provided a script which creates and inserts a few rows into a sample table, runs the DBCC IND command to find a list of pages for the sample table, then uses the DBCC PAGE command to examine the page.
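I don’t have the user’s exact script, but a minimal repro along those lines might look like this (a sketch: the database, table, and column names are mine, and the page id passed to DBCC PAGE must come from your own DBCC IND output):

```sql
-- Sketch of a repro script; names are illustrative, not the user's.
CREATE TABLE dbo.RowSizeDemo
(
    Id      INT IDENTITY(1,1) PRIMARY KEY,
    Payload CHAR(700) NOT NULL DEFAULT ('x')
);
INSERT dbo.RowSizeDemo DEFAULT VALUES;
GO

-- List the pages allocated to the table (undocumented command).
DBCC IND ('TestDB', 'dbo.RowSizeDemo', -1);
GO

-- Examine one data page: pick a PagePID from the output above where PageType = 1.
DBCC TRACEON (3604);             -- route DBCC PAGE output to the client
DBCC PAGE ('TestDB', 1, 312, 3); -- database, file id, page id (placeholder), print option 3
GO
```

In the DBCC PAGE output, the per-slot “Record Size” line is where the 710-vs-724 difference shows up.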

The first thing I looked for: how many bytes is the difference? Is it 14 bytes?

I love it when someone sends me a repro script, but in this case I didn’t need to run it. The first thing I did was to look at the two numbers given for row size, and to subtract the smaller one from the larger one: 724 – 710 = 14 bytes of difference.

That bit of information alone gave me an immediate guess of what was going on.

Row versioning in SQL Server has a 14-byte row overhead

You can reduce blocking in SQL Server by enabling “optimistic” locking. There are trade-offs to this approach, however: row versions are stored in tempdb’s version store, and 14 bytes of overhead may be added to each affected row. (For a quick overview of this row versioning process, check out this Simple Talk post by Kalen Delaney.)
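For reference, enabling optimistic locking looks like this (a sketch, using a hypothetical database name):

```sql
-- Hypothetical database name. Note: SET READ_COMMITTED_SNAPSHOT ON needs
-- (brief) exclusive access to the database unless you add a termination
-- option such as WITH ROLLBACK IMMEDIATE.
ALTER DATABASE TestDB SET READ_COMMITTED_SNAPSHOT ON;

-- Or allow snapshot isolation, which sessions opt into per transaction
-- with SET TRANSACTION ISOLATION LEVEL SNAPSHOT:
ALTER DATABASE TestDB SET ALLOW_SNAPSHOT_ISOLATION ON;
```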

My guess is that row versioning is enabled only on the SQL Server 2016 and above instances being tested in this case. This could be because of any of the following:

  • Snapshot isolation is allowed on the database
  • Read committed snapshot isolation is enabled on the database
  • The database has a readable secondary in an availability group
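You can check the first two settings with a quick query against sys.databases (a sketch, using a hypothetical database name; run it on each instance being compared):

```sql
SELECT name,
       snapshot_isolation_state_desc,  -- 'ON' if snapshot isolation is allowed
       is_read_committed_snapshot_on   -- 1 if RCSI is enabled
FROM sys.databases
WHERE name = N'TestDB';               -- hypothetical database name
```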

You might wonder – why would having a readable secondary cause the row to grow?

Readable secondaries in SQL Server allow you to offload read workloads.

It’s important to minimize blocking against those readable secondaries, though: you don’t want data modifications flowing through to the readable secondary to be blocked by reporting queries that you are running against the secondary, after all. You also want reads to be as consistent as possible.

For this reason, queries that are run against readable secondaries are automatically escalated to snapshot isolation. And snapshot isolation requires row versioning: those 14 bytes of space are needed to help make it all work.

But readable secondaries are read-only, right? So how can SQL Server write those 14 bytes? The answer is that it’s complicated. Here’s what the “capacity planning” section of Microsoft’s docs says:

When you configure read-access for one or more secondary replicas, the primary databases add 14 bytes of overhead on deleted, modified, or inserted data rows to store pointers to row versions on the secondary databases for disk-based tables. This 14-byte overhead is carried over to the secondary databases. As the 14-byte overhead is added to data rows, page splits might occur.

The row version data is not generated by the primary databases. Instead, the secondary databases generate the row versions. However, row versioning increases data storage in both the primary and secondary databases.

Offload read-only workload to secondary replica of an Always On availability group

To dig into the details, follow the link: there’s even more info on the full page.
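Closer to home, one way to see the version store at work on your own instance is to query the version-store DMVs in tempdb (a sketch; this DMV is available in SQL Server 2016 and above):

```sql
-- Space used by the version store in tempdb, broken down by database.
SELECT database_id,
       reserved_page_count,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;
```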

