The Many Levels Of Concurrency

“Concurrency is hard”

Construction begins on the Tower of Babel, 610 BCE (colorized)

When designing for high concurrency, most people look to the hardware for answers. And while it’s true that it plays an important role, there’s a heck of a lot more to it.

Start Optimistic

Like, every other major database vendor is optimistic by default. Even the free ones. SQL Server is rather lonesome in that, out of the box, you get the Read Committed isolation level.

Everyone else gives you something close to Read Committed Snapshot Isolation. This is why you don’t see too many Oracle bloggers wasting precious internet on the perils of NOLOCK.
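
If you want to play with this on a test system first, flipping a database over to optimistic locking is a one-liner (the database name below is a placeholder):

    -- Readers get the last committed version instead of blocking behind writers
    ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    -- Optional: let queries explicitly opt in to full Snapshot Isolation as well
    ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;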

Pros

  • Avoid reader/writer blocking from the start
  • Makes reporting on OLTP data easier
  • Can alleviate some deadlocking scenarios

Cons

  • Can drive tempdb crazy (there’s a quick version store check after this list)
  • If your transactions depend on short blocks for queuing, you may need to use locking hints
  • Those orphaned “begin tran” queries have more repercussions
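
If you’re curious how hard the version store is hitting tempdb, here’s a rough sketch using a DMV that’s been around forever (page counts are in 8 KB pages):

    -- Rough look at how much of tempdb the version store has reserved
    SELECT  SUM(version_store_reserved_page_count) * 8 / 1024. AS version_store_mb
    FROM    tempdb.sys.dm_db_file_space_usage;
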
Start Normal

Perhaps the un-sexiest bit of databases: normalization! This plays a huge part in concurrency, though.

If you choose to store all of your data in overly wide tables, a lot more queries rely on them simultaneously. If you’ve followed my advice on optimistic locking, you’ve bought yourself a little reprieve, but all those modification queries still have to get along.

Wider tables are also a lot more difficult to index. The number and variety of queries that touch wide tables in different ways often lead to many wide nonclustered indexes to compensate.

The more you spread data out into well-normalized tables, the easier it becomes to spread out writes and to index for specific query patterns.
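
As a toy sketch (table and column names are made up), peeling hot, write-heavy columns out of a kitchen-sink table might look like this:

    -- Hypothetical kitchen-sink table, trimmed way down for the sketch
    CREATE TABLE dbo.Users
    (
        UserId      INT           NOT NULL CONSTRAINT PK_Users PRIMARY KEY,
        DisplayName NVARCHAR(100) NOT NULL,
        Bio         NVARCHAR(MAX) NULL
    );

    -- Constantly updated login columns get their own narrow table,
    -- so those writes stop fighting with queries reading profile data
    CREATE TABLE dbo.UserLoginActivity
    (
        UserId        INT       NOT NULL
            CONSTRAINT PK_UserLoginActivity PRIMARY KEY
            CONSTRAINT FK_UserLoginActivity_Users REFERENCES dbo.Users (UserId),
        LastLoginDate DATETIME2 NOT NULL,
        LoginCount    INT       NOT NULL
    );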

I can’t recommend Louis Davidson’s book on database design enough.

Start Setting

I know it’s been said everywhere on the internet, and anyone heavily invested in high-end design and implementation will have a runbook for SQL Server setups.

But still, this stuff matters! Let’s take a look at a couple things Microsoft has made better, and a couple things they haven’t.

Good

Having tempdb and instant file initialization set up correctly from the get-go can have huge impacts down the line. Especially with trace flags 1117 and 1118 being the default tempdb behavior starting with SQL Server 2016. Back when we talked about optimistic isolation levels, we talked a little about tempdb. All the row versions that allow readers and writers to not get all gummed up go to tempdb. You’re gonna want this going blazing fast.
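
If you’re still on something older than 2016, here’s a sketch of the usual tempdb starting point (trace flags set this way don’t survive a restart; you’d normally also add -T1117;-T1118 as startup parameters):

    -- On SQL Server 2014 and earlier, these mimic the 2016+ tempdb defaults:
    -- 1117 grows all files in a filegroup together, 1118 avoids mixed extents
    DBCC TRACEON (1117, 1118, -1);

    -- Sanity check: how many tempdb data files are there, and are they evenly sized?
    SELECT name, type_desc, size * 8 / 1024 AS size_mb, growth
    FROM   tempdb.sys.database_files;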

Having instant file initialization turned on is a no brainer (unless you’re covered in tin foil for non-fetish-related reasons). End users typically don’t like having pauses in query processing while waiting for files to grow. Letting SQL and Windows negotiate this with IOUs behind the scenes makes data file growths a whole heck of a lot less painful — especially if you’re the kind of person who lets their files grow by percentages. You know, 10% of 4 GB is a whole lot less than 10% of 400 GB.
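
If you want to double-check that it’s actually on, newer builds (SQL Server 2016 SP1 and up) expose it in a DMV:

    -- 'Y' means the engine service account holds Perform Volume Maintenance Tasks
    SELECT servicename, service_account, instant_file_initialization_enabled
    FROM   sys.dm_server_services;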

Still bad

  • MAXDOP
  • Cost Threshold for Parallelism

I know these are a bit personal, especially if you’re on SharePoint (forgive me for speaking such words), but here’s why they impact concurrency.

You only get a certain number of threads to work with, and they’re based on your CPU count.

512 + ((logical CPUs - 4) * 16), on 64-bit with more than 4 and up to 64 logical processors
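
You can see what that works out to on your box without doing the math:

    -- The worker thread ceiling SQL Server calculated, next to the CPUs it sees
    SELECT max_workers_count, cpu_count, scheduler_count
    FROM   sys.dm_os_sys_info;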

You’ve got to factor a lot into how many threads you need:

  • Number of Users
  • Number of parallel queries
  • Number of serial queries
  • Number of databases and data synchronization (think Mirroring, AGs, backups for Log Shipping)

If you leave MAXDOP and CTFP at the defaults, you’re letting just about every query use every core.

You’re looking at running out of threads pretty quickly, if you’ve got any kind of user base. Nothing kicks concurrency to the curb like THREADPOOL waits.

This is SQL Server we’re talking about, and all those cores are doggone expensive. If a query wants to use more than one, it better be worth it.
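
If you decide to move off the defaults, both knobs live in sp_configure. The numbers below are placeholders, not recommendations; what’s right depends on your cores, NUMA layout, and workload:

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Cap how many cores a single query can burn
    EXEC sys.sp_configure 'max degree of parallelism', 8;

    -- Raise the bar a plan's cost has to clear before it goes parallel
    EXEC sys.sp_configure 'cost threshold for parallelism', 50;
    RECONFIGURE;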

Those defaults have been around since Justin Timberlake was country the first time.

Start Hard

As we speak, I’m kicking 128 GB of RAM. I don’t even have a real job.

If you’re sizing production servers, or Tituss Burgess forbid, VM hosts, and you think 256 GB of RAM is a good number, you have lost your mind to oblivion.

SQL Server 2016 SP1 Standard Edition can use 128 GB of RAM to cache data alone. If your database isn’t that big today, it will be soon. And you know you’re working on a data warehousing project, too.

On top of that, it can use

  • 32 GB for the columnstore segment cache (per instance)
  • 32 GB for In-Memory OLTP (Hekaton) data, per database

And it can use any memory in your server above the 128 GB mark for query processing, DMV and plan caching, backups, and just about anything else.

Choices, choices…

Thanks to Bob Ward (through whom all things are possible, except learning how to use the debugger) for confirming my suspicions about the memory grant part.

Stepping back a bit to design — you better choose those data types carefully if you want to make the most out of your memory.

Running out of memory means waiting on memory. Much like THREADPOOL, this is not what you want your queries waiting on.
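
A quick way to catch queries stuck waiting on workspace memory (you’ll see it pile up as RESOURCE_SEMAPHORE waits):

    -- Sessions that have asked for a memory grant but haven't received one yet
    SELECT session_id, requested_memory_kb, required_memory_kb, wait_time_ms
    FROM   sys.dm_exec_query_memory_grants
    WHERE  grant_time IS NULL;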

Having data cached in memory and having enough memory for queries to run without spilling oodles of noodles to disk is important.

Babel of Bloggers

Concurrency isn’t the domain of any one aspect of the database. You have to put a lot of thought into it at every level of design.

There are other ways to shoot yourself in the foot, whether it’s instance stacking, putting all your clients in one database, or putting hundreds of client databases on one server.

These are the kinds of design decisions people seem determined to make.

While they’re cheaper and easier in the beginning, they’re often expensive and difficult to undo later.

Like these butt implants I got.

Thanks for reading!


7 Comments

  • Hi Erik,

    Great post as usual. Good to see the Vyvanse is doing its thing. I’ve been learning more about concurrency lately and I was surprised to see that SQL Server may not do the old “writers don’t block readers” thing out of the box. Turns out it was locking the whole page because of the low data volume I was working with, so my next goal is to learn more about the threshold it sets to decide if it should lock the whole page, or just the row/s in question. Anywho, I have a few questions/concerns/criticisms about your squat form:

    1. What causes RC isolation to drive tempdb crazy?
    2. I agree that wide tables are a PITA to index. We have several I’m trying to curtail into something more reasonable. When normalizing wide tables down to multiple narrow(er) tables, like 3NF, could you possibly be trading one concurrency issue for another, as you may need to lock keys/rows/pages on multiple tables, what for the ACIDs and all?
    3. Pics of ATG squats with butt implants or it didn’t happen.

    Thanks!!

    • Hey Brandon,

      Vyvanse is just one of what I call the “Three Vs” that have really turned my life around.

      To give you some quick answers:

      1. Row versioning for optimistic isolation levels happens in tempdb. Large and long transactions can take up a lot of space, and the space needed increases with more indexes involved in the transaction.
      2. Sure, maybe, but well-placed and indexed foreign keys can minimize the impact
      3. If you contribute to my patreon you can watch all my workout vids and tell people we’re squat plug bros.

      Cheers from Old Albion!
      Erik

  • Ah yes, the “3 Vs To Success;” Vyvanse, Valium, and Vianabol, I know them well. Not srs. Ok semi srs. Ok srs.

    1. Looks like I was mistaken about what you meant by “row versioning for optimistic isolation levels,” thinking (stock, standard, without setting allow RC snapshot DB option on) read committed fell under this umbrella. After doing some reading, it appears this now only applies to the snapshot transaction level, and/or the “allow read committed snapshot” DB option set to on. I’m also dinkering in some of the DMVs that show the versioning footprint in tempdb. Interesting stuff that. I’m currently working on selling the team on temporal tables, and from everything I’ve found, they appear to store versioning info solely in the history table while a transaction is in-flight. I didn’t find any versioning footprint in sys.dm_tran_version_store. If that is the case, it would seem as though we can gain the benefits of snapshot isolation without adding any extra crap to tempdb, which takes quite the pounding as it is.
    2. Sadly, this may not be the case here, as even if we were to split the wide table(s), they would still be “one thing” as far as the application (read: PMs) are concerned, and they may want to update attributes that would then span multiple tables. I haven’t gone too far down that rabbit hole as of yet. One day perhaps.
    3. Donating imaginary internet money now.

    Thank you again for your sage wisdom.

    Your squat plug bro,
    -B

  • Do I understand correctly that “wide” table guidance would be the reverse for a clustered columnstore, where my intuition tells me you would want to have as many columns as possible as everything is divided into segments anyway?

    • Hi Bartosz,

      If you’re talking about a DW, where the data is relatively stable and you’re working with large chunks of it at a time, then a wide table with a clustered columnstore index would be a great solution.

      In my case, the tables I’m trying to split up are extremely volatile OLTP tables where the application is only working with a single row at a time. If the tables were to be split up, they would still be one “thing” as far as the application and PMs are concerned, so updating multiple attributes on the “thing” might lead to more locks being taken out. But, it may also result in fewer locks, depending on just what attribute/s are updated. So, in summary, as usual, “it depends.”

  • alen teplitsky
    March 6, 2018 8:34 am

    One thing I’ve done a few times is recreate a clustered index in a way totally opposite of best practices. But it worked and reduced locking and blocking. It doesn’t work all the time and depends on your table and data. But in some cases, finding a non-unique column for a clustered index can do wonders and reduce the need for other indexes, saving you disk space and IO.

  • Learn about the impact of locking and blocking in sessions through some demonstrations of different types of actions. Learn a bit about concurrency problems in SQL Server, the issues they create, and the different isolation levels that help you solve them.

