PASS Summit 2013 Day 2 Keynote

KILT DAY!

Today we have to eat our vegetables and then get lots and lots of sweet dessert.

Or.

Today we hear about PASS Finances as a part of the official annual meeting and then we get to hear Dr. David Dewitt speak (completely and utterly getting our nerd on and squeeing like teenage girls at a Bieber concert).

I will be live-blogging this event, so watch this space.

8:20: Douglas McDowell kicks off the keynote today. The vast majority of the money that runs PASS comes from the Summit. That’s right, by attending the Summit you’re also supporting the organization. The Business Analytics Conference, which kicked off this year, also provides quite a bit more money to the organization.

8:25: PASS has changed its budgeting process. At this point, there is about 1 million dollars (American) in the bank. That means they’ve got a cushion should an event go south. That’s very important.

The amount of money spent on the community last year was $7.6 million. 30% of that is focused specifically on international audiences (which begs the question, how much money comes FROM the international audiences?). The money is spent on the Summit, the BA Conference, Chapters, SQL Saturday, 24 Hours of PASS and 520 web sites (woof).

8:31: Bill Graziano, PASS President, takes the stage to say goodbye to the PASS Board members who are leaving. Douglas McDowell, who was just talking, is leaving after six years on the board and volunteering since 2001. Rob Farley is also leaving the board. Rushabh Mehta comes on stage after eight years on the board. He’s the Immediate Past President, a role that automatically rolls off the board after a couple of years.

Next up, Thomas LaRock, the current Vice President of Marketing and the incoming PASS President. We had about 3,000 unique viewers watching live on PASS TV (which I launched this morning, talking about KILT DAY!). The new board positions are Adam Jorgensen, Executive Vice President, and Denise McInerney, Vice President of Marketing. Jen Stirrup, Tim Ford and Amy Lewis are coming onto the board.

In 1999, the Summit started. That’s 14 years. I’ve made 9 of them in a row.

8:38: PASS Summit 2014 will be November 4-7 in Seattle. The PASS BA Conference will be in San Jose, CA, May 7-9, 2014.

Remember there are tons of networking opportunities.

8:41: What, Why, How Hekaton with Dr. David DeWitt

Let’s get our nerd on.

Dr. DeWitt is one of the things that make the Summit what it is.

WHAT:

Hekaton is a memory-optimized but durable, very high performance OLTP engine, fully integrated into SQL Server 2014 and architected for modern CPUs. It really is a wicked cool technology. I still don’t buy the idea that you don’t need new hardware for this, but that’s not questioning the utility of the functionality.

WHY:

OLTP performance has started to plateau with current technology. CPU speeds just aren’t increasing fast enough any more, so they had to find another way to improve performance. The goal for Hekaton was a 100x improvement. They didn’t make that, but they got between 10x and 30x, which is still pretty amazing.

You can’t just pin all your tables in memory and call it done. Latches on shared data structures are going to hurt, locks are still needed for concurrency control, and the execution plans that get generated won’t be any better.

The buffer pool is built around the assumption that data lives on disk, with pages moving between memory and storage over time.

You’ll need to track down the slides to understand some of what I’m saying in this live blogging. It won’t make sense without them.

I found it here

So a query needs a page. It checks for the page. The query gets blocked until the page gets allocated and then it continues from there. But another query can be blocked by the process coming in. So they added latches to the pages in the buffer pool. He shows how the latches allow multiple queries to find objects in the pool while marking them as being used. But this ultimately runs into performance problems because the shared data structures need latches, and latches take time to maintain.

8:55:

You also have to have concurrency control, in short, locking and blocking (you know, the stuff that NOLOCK “fixes”). Jim Gray, mentor to Dr. DeWitt, came up with two-phase locking. A query acquires locks of the appropriate type from the lock manager and only releases them when it’s done, at which point they can be granted to other queries. This basically enforces the equivalent of a serial order so things get done correctly.
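
To make the contrast concrete, here’s a minimal sketch of what that classic behavior looks like from the outside. The dbo.Orders table and its columns are made up for illustration, and this assumes an ordinary disk-based table under the default READ COMMITTED isolation level (no row versioning).

```sql
-- Session 1: takes an exclusive lock on the row and, per two-phase locking,
-- holds it until the transaction ends.
BEGIN TRANSACTION;
UPDATE dbo.Orders
SET    OrderStatus = 'Shipped'
WHERE  OrderId = 42;
-- ...the lock is still held here...

-- Session 2: this read blocks on Session 1's lock until it commits or rolls back.
SELECT OrderStatus
FROM   dbo.Orders
WHERE  OrderId = 42;

-- Session 1: releasing the locks at the end (the "shrinking" phase) finally lets Session 2 through.
COMMIT TRANSACTION;
```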

When the database lives on disk, the processing time to parse a query and create a plan can be trivial (not always), but if the data is all in memory, that overhead becomes way too expensive.

All this is the reason you can’t just pin stuff in memory and call it a day.

Shared data structures have latches. Concurrency control uses two-phase locking. Query plans are executed through interpretation.

Hekaton, on the other hand, uses lock-free data structures, meaning no latches. They’ve stopped using locking for concurrency control; instead, Hekaton uses versions with timestamps plus optimistic concurrency control. And queries are actually, literally, compiled into a DLL. That’s right. COMPILED. Queries have been “compiled” into an interpretation layer all this time, not literally compiled. But with this, they’re getting turned into DLLs.

There are now three query engines in SQL Server. Relational, Column Store and Hekaton. These are three distinct stacks. Queries can span all three.

9:06: First, you create a “memory optimized” table. That table does have limits in structure and the data types supported (look them up).
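
As a rough sketch of step one (the database, table and file names here are invented, not from the session): the database first needs a MEMORY_OPTIMIZED_DATA filegroup, and then the table itself is declared memory optimized.

```sql
-- One-time setup: a filegroup and container for memory-optimized data.
ALTER DATABASE HekatonDemo
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE HekatonDemo
    ADD FILE (NAME = 'imoltp_container', FILENAME = 'C:\Data\imoltp_container')
    TO FILEGROUP imoltp_fg;
GO

-- The memory-optimized table itself, with a hash index on the primary key.
CREATE TABLE dbo.OrderLookup
(
    OrderId    INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT       NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```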

Second, populate the table, but you have to make sure that data will absolutely fit in memory. You can’t put 5GB of data into a system with 2GB of memory. NO PAGING TO DISK. It’s in-memory, right?

Third, run queries, but there are some language restrictions.
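
Continuing with the hypothetical dbo.OrderLookup table from the sketch above, steps two and three can be perfectly ordinary T-SQL, subject to the restrictions he mentions:

```sql
-- Step two: populate the table (just make sure it all fits in memory).
INSERT INTO dbo.OrderLookup (OrderId, CustomerId, OrderDate)
VALUES (42, 7, SYSDATETIME());

-- Step three: query it. This is regular "interop" T-SQL; the native,
-- compiled path comes later in the talk.
SELECT CustomerId, OrderDate
FROM   dbo.OrderLookup
WHERE  OrderId = 42;
```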

9:12: HOW:

Lock-free data structures: the data structures are truly rocket science. They make query optimization look simple (OW!). These were invented by Maurice Herlihy at Brown University. The “lock-free” name isn’t really about transaction locks; in SQL Server terms it’s about being latch-free. Dr. DeWitt tells us he could explain it in about 30 minutes, but instead we get a demo.

He’s showing that the latched approach slows down more and more as more threads hit the system, while the lock-free approach actually keeps increasing in throughput. Then, when updates occur, the latched version stops everything until the update completes. The lock-free mechanism doesn’t stop at all. It doesn’t even slow down. The lock-free mechanisms alone took five years to build.

Multi-version, optimistic, timestamped concurrency control: the assumption is that conflicts are rare. Transactions run to “completion” with no locks, and conflicts are resolved later. Multi-version means that updates create a new version of the row, and each row version has a valid time range. Transactions use their begin timestamp to select the correct version. Timestamps are used to create a total order for transactions, to obtain the equivalent of a serial order. That cuts down on contention between threads, which cuts down on the likelihood of blocking.

Reads see committed versions as of the transaction’s start. Updates create new “tentative” versions, and the engine tracks the rows read, written and scanned. Then the transaction goes through a pre-commit step, which gives you validation, and the concurrency control finishes its work in post-processing.

Timestamps are just counters. Every transaction gets a begin time and an end time so you can track what happened when. End times are always unique, and that’s how you can manage who goes first in terms of concurrency.

So a row gets tagged with a begin timestamp and then, when the transaction completes, a unique end timestamp. When an update starts, you get a new version of the row, with pointers linking the versions of the row together. A “magic identifier” from the transaction gets assigned to the versions it touches: it becomes the end timestamp on the older version, while the new version gets a begin timestamp and no end timestamp at all. So, this means no latches were used, no locks were set, and no other transactions were blocked. This creates the basis of multi-version concurrency control.

So if you have two transactions running concurrently, you’ll see the first transaction create a new version of a row. Then a second transaction tries to read the row. If its begin timestamp is earlier than the first transaction’s end timestamp, it’ll use the older version, because the first transaction isn’t complete yet and its new version isn’t visible.

Yeah, that sounds confusing, but looking at the slides you’ll get it.
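
From the user’s side, here’s a rough sketch of how that plays out, again using the hypothetical dbo.OrderLookup table. This is my illustration, not his demo.

```sql
-- Session 1: creates a new, tentative version of the row. No locks, no latches.
BEGIN TRANSACTION;
UPDATE dbo.OrderLookup WITH (SNAPSHOT)
SET    CustomerId = 8
WHERE  OrderId = 42;

-- Session 2: readers are never blocked; they simply get the last committed version,
-- chosen by comparing their begin timestamp to each version's valid time range.
SELECT CustomerId
FROM   dbo.OrderLookup WITH (SNAPSHOT)
WHERE  OrderId = 42;

-- Session 2 (again): a second writer touching the same row doesn't wait on a lock;
-- it fails immediately with a write/write conflict error and has to retry.
UPDATE dbo.OrderLookup WITH (SNAPSHOT)
SET    CustomerId = 9
WHERE  OrderId = 42;

-- Session 1:
COMMIT TRANSACTION;
```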

Then, a cleanup process has to occur. When the begin timestamp of the oldest transaction still running in the system ticks past the end timestamp of an old version, that older version can be removed. This cleanup is cooperative, non-blocking, incremental, parallel and self-throttling.

Each version contains a valid timestamp range. Transactions are coordinated through timestamps and versions: a transaction reads only the versions of rows whose valid time range overlaps the transaction’s begin timestamp.

THEN, we have to go through Validation.

1. The transaction obtains a unique end timestamp

2. Determine if the transaction can be safely committed

3. Validation steps depend on isolation level (check the slides for details).

Each version read is checked to see whether it’s still “visible” or “valid” at the end of the transaction. This also handles phantom avoidance. But everything is in memory and we’re not taking locks, so while validation is expensive, it’s still cheaper than the old approach of latching and locking.
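
As an illustration of how the isolation level drives that validation work (my sketch, reusing the hypothetical dbo.OrderLookup table): with SNAPSHOT, only write/write conflicts get checked; REPEATABLEREAD re-reads the rows you touched at commit time; SERIALIZABLE also re-checks the scanned ranges, which is where the phantom avoidance comes in.

```sql
-- Interop access inside an explicit transaction; the table hint requests the
-- isolation level, and the commit-time validation gets more expensive as you
-- move from SNAPSHOT to REPEATABLEREAD to SERIALIZABLE.
BEGIN TRANSACTION;

SELECT CustomerId, OrderDate
FROM   dbo.OrderLookup WITH (REPEATABLEREAD)  -- or SNAPSHOT / SERIALIZABLE
WHERE  OrderId = 42;

COMMIT TRANSACTION;  -- rows read are re-validated here before the commit succeeds
```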

Post-processing goes through three phases. You get a log record with all the row versions written and the primary keys of all deleted rows, and it goes to the log in a single I/O. Then, for all rows in the transaction’s write set, the transaction ID is replaced with the end timestamp.

I sort of understand all this. The trick will be to remember it and then learn how to explain it to others.

But you still have to have checkpoints and recovery. Data is written to the log and checkpoint files during checkpoint operations, roughly the same as normal. Recovery loads the known checkpoints and scans the log to recover all work since then. It has full integration with High Availability.
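
Durability is a per-table setting. A hedged sketch (the dbo.SessionCache table is made up): SCHEMA_AND_DATA, the default used in the earlier sketch, is what gets logged and checkpointed so the rows survive recovery; SCHEMA_ONLY skips logging entirely, and only the table definition comes back after a restart.

```sql
-- A non-durable memory-optimized table: nothing is logged or checkpointed for its rows,
-- so the data is gone after a restart. Useful for staging or cache-style workloads.
CREATE TABLE dbo.SessionCache
(
    SessionId INT           NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload   NVARCHAR(200) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```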

9:43

Then we got to queries and query plans. You can run regular queries in what they call interop mode, but you sacrifice performance. Instead, you want to compile them. You get physical plans kind of the same way as you used to (not quite the same, but I hit a snag when he explained that part, check the slides), but then the plan goes through a translator which generates C code. Evidently, really ugly C code. Then the compiler is called and you get a DLL. It’s 100% specific to that query, with no general-purpose functions. The DLL gets loaded and invoked, and you never recompile that query again.
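
The compiled path is exposed through natively compiled stored procedures. Here’s a minimal sketch against the hypothetical dbo.OrderLookup table; in SQL Server 2014 the SCHEMABINDING, EXECUTE AS, and ATOMIC block (with its isolation level and language) are all required.

```sql
CREATE PROCEDURE dbo.GetOrder
    @OrderId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- This body is what gets translated to C and compiled into a DLL.
    SELECT OrderId, CustomerId, OrderDate
    FROM   dbo.OrderLookup
    WHERE  OrderId = @OrderId;
END;
GO

-- Calling it runs the compiled DLL; there's no plan interpretation at execution time.
EXEC dbo.GetOrder @OrderId = 42;
```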

The instruction counts are interesting. A classic table can take 700 instructions to find a row; with Hekaton it’s 332, and with a natively compiled query plan it’s 75. Even if you don’t find the row, it’s 300, 110 and 31 respectively.

Interop can get you up to a 3x improvement, but there are still language limits. Native mode has similar restrictions, but you get 10x to 30x improvements with it.

Finally, he’s going through a whole bunch of performance improvements by various, REAL, companies using it now.

The bottom line is that memory prices have been declining and we’re seeing lots of CPU cores designed for concurrency, but we’ve been hurting on the CPU side through the lack of compiled code. Hekaton is built to take advantage of those hardware trends.
