
PASS Summit - David DeWitt Keynote


Microsoft sent Dr. David DeWitt to deliver the last keynote of the PASS Summit. He's a Technical Fellow in the Data and Storage Platform division, and his keynote looks ahead at the future of SQL Server and at broader technology trends.

His talk is titled "From 1 to 1000 MIPS."

His background: 32 years at the University of Wisconsin. He now runs the Jim Gray Systems Lab in Madison, with the idea of taking technology out of UW into the Microsoft product suite. It's not part of Microsoft Research, and it has 3 faculty and 8 grad students.

The lab is working on releases 1 and 2 of SQL Server Parallel Data Warehouse. Last year he talked about parallel database technology; this year the idea is to look at trends in hardware and then at database system specialization over the next decade. I tend to agree with this, and we may well need to specialize with new database technology.

Let's go back to 1980. The VAX 11/780: 1 MIPS, 1KB of cache, 8MB of memory, 80MB disk drives, and 1MB/sec transfer rates, all for $250k! This was the era of the first 32-bit relational databases. INGRES and Oracle were the main relational database vendors, and the basic DBMS architecture they used is the same one we use today.

Hardware, however, is different now. We run at around 2 GIPS, with 1MB caches on chip, 2GB of RAM per CPU, and 800GB drives. Everything is 1,000x greater or more.

Disk Drives

If we look at transfer rates on disks, we've only seen about a 65x improvement, and seek times have only improved about 10x. Quite a disparity.

1985, early benchmarks: 100 TPS, 400 disk IOs/sec, with 14 drives. Roughly 30 IOs/sec per drive.

Jump to 2009: 25,000 TPS, 100,000 disk IOs/sec, and 330 drives, which works out to roughly 300 IOs/sec per drive.
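A quick back-of-envelope check on those per-drive numbers (his benchmark figures, my division):

```python
# IOs/sec per drive, then and now (benchmark figures from the talk).
print(400 / 14)       # 1985: ~29 IOs/sec per drive
print(100_000 / 330)  # 2009: ~303 IOs/sec per drive
```

So per-drive random I/O has improved only about 10x, which is the figure he uses next.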

So the 1,000x improvement in CPUs has been almost negated by the roughly 10x improvement in disk accesses per second. That is amazing. And sobering.

Transfer bandwidth per byte is a new metric: divide a drive's transfer rate by its capacity and you get about .015 for 1980 drives and .0001 for today's drives. So Dr. DeWitt sees drives as relatively slower than they used to be. Hard to reconcile that in my little mind.
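Working his numbers with the figures already mentioned (1MB/sec over an 80MB drive in 1980; and, assuming the 65x transfer improvement above, about 65MB/sec over an 800GB drive today):

```python
# Transfer bandwidth per byte = transfer rate / capacity, both in MB.
print(1 / 80)        # 1980: ~0.0125 MB/sec per MB (he rounds to .015)
print(65 / 800_000)  # today: ~0.00008, roughly the .0001 figure
```

Relative to how much data a drive holds, its ability to move that data has fallen way behind, and that's the sense in which drives have gotten relatively slower.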

CPU Trends

In 1980 the VAX had a single CPU, an 8KB L1 cache, 6 cycles to access memory, and about 10 cycles per instruction.

Today we have multiple CPUs, 64KB private L1 caches, a 2-8MB shared L2 cache, and roughly 1 cycle per instruction. It takes 2 cycles to access L1 and 20 cycles to access L2.

What is the impact on DBMS performance? He looked at DB2 on Linux, running TPC-H queries against a 10GB database with a 1GB buffer pool.

Looking at a CPU and where its time is spent, a lot of it goes to waiting: only about 10% is useful computation time, and roughly 50% is spent waiting on memory. Why? The CPU is stalled waiting on L2 data cache transfers. That is amazing.

Reading 3 pages from disk can mean up to 9 L1 and L2 cache misses, and an L2 cache miss can stall the CPU for up to 200 cycles.
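A rough back-of-envelope shows how those stalls eat the CPU. The 200-cycle stall and ~1 cycle/instruction are from the talk; the per-tuple miss count and instruction count are my assumptions, just to illustrate the shape of the problem:

```python
# Why cache misses dominate: a few stalls swamp the useful work.
l2_miss_stall_cycles = 200           # upper bound quoted in the talk
misses_per_tuple = 5                 # assumed for illustration
useful_instructions_per_tuple = 100  # assumed, at ~1 cycle/instruction

stall_cycles = misses_per_tuple * l2_miss_stall_cycles  # 1000 cycles
useful_cycles = useful_instructions_per_tuple * 1       # 100 cycles
print(useful_cycles / (useful_cycles + stall_cycles))   # ~0.09
```

That lands right around the ~10% useful computation figure above.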

An alternative physical layout is a column store. What does this mean? I wasn't sure this was a great idea when I read about it last year, but I'd recommend reading up on it to understand it. If we now look at the cache misses, there's a difference: the same query we saw before has far fewer cache misses, and less I/O is wasted.
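As a toy illustration of the difference (my sketch, not any real engine's storage format): a query that only needs one column has to drag every attribute of every row through the cache in a row store, but only touches one contiguous array in a column store.

```python
# Row store: each tuple stored contiguously.
rows = [
    {"id": 1, "name": "widget", "price": 9.99, "qty": 3},
    {"id": 2, "name": "gadget", "price": 4.50, "qty": 7},
]

# Column store: each attribute stored contiguously.
columns = {
    "id":    [1, 2],
    "name":  ["widget", "gadget"],
    "price": [9.99, 4.50],
    "qty":   [3, 7],
}

# Row-store scan pulls whole rows into the cache just to sum one column.
total_from_rows = sum(r["price"] for r in rows)

# Column-store scan reads only the one array it needs.
total_from_columns = sum(columns["price"])

assert total_from_rows == total_from_columns
```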

In an example he gave for a 10M row table: a simple query touching 3 columns scans 2GB of data in a row store and takes 25 seconds, while a column store scans only 280MB in 3.5 seconds. That's amazing, though I'm not sure how useful this is for most of us. It's definitely built for data warehousing, and might not apply to OLTP loads.
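Taking those numbers at face value (and calling 2GB roughly 2,000MB), the speedup lines up almost exactly with the reduction in bytes scanned:

```python
print(2_000 / 280)  # ~7.1x less data scanned by the column store
print(25 / 3.5)     # ~7.1x faster
```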

Physical Representation

We have our row store, a column store, and hybrids, and we could include an ID with the column values as well. I know it's confusing, and I'm not giving enough detail, but it's confusing to me too. This is probably covered in a paper, or a series of papers, somewhere; I'll try to find references.

Compression starts to work better because CPUs are 1,000x faster while disks are only 65x faster, so even though you spend CPU time doing decompression, it still comes out ahead. Here are some of the types Dr. DeWitt talked about (a toy sketch of all three follows below).

Run-length encoding: a good way to compress data; it works especially well with sorted data.

Bit-vector encoding: another way to compress; use a bit vector per value when there are relatively few distinct values. Combine this with RLE to increase compression.

Dictionary encoding: create a dictionary for the values and store small codes in their place.
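Here's the toy sketch of those three encodings (illustrative only, not how any particular engine implements them):

```python
values = ["MA", "MA", "MA", "NY", "NY", "CA"]  # a sorted column

# Run-length encoding: (value, run length) pairs; shines on sorted data.
def rle_encode(col):
    runs = []
    for v in col:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

# Dictionary encoding: store each distinct value once, keep small codes.
def dict_encode(col):
    dictionary = {v: i for i, v in enumerate(dict.fromkeys(col))}
    return dictionary, [dictionary[v] for v in col]

# Bit-vector encoding: one bit vector per distinct value.
def bitvector_encode(col):
    return {v: [1 if x == v else 0 for x in col] for v in dict.fromkeys(col)}

print(rle_encode(values))        # [('MA', 3), ('NY', 2), ('CA', 1)]
print(dict_encode(values))       # ({'MA': 0, 'NY': 1, 'CA': 2}, [0, 0, 0, 1, 1, 2])
print(bitvector_encode(values))  # {'MA': [1, 1, 1, 0, 0, 0], ...}
```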

With a row store you have different issues: you can't run-length compress in that layout, or at least not easily or well.

A column store scanner is more complex when it comes to satisfying a query; it works through the query along a much different path. This is very interesting stuff, and it was very well explained.
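One example of where that extra complexity buys you something (my illustration of the general idea, not the actual scanner): the scanner can evaluate predicates directly on RLE runs instead of decompressing every value first.

```python
def count_matches_rle(runs, predicate):
    """Count matching rows without expanding the (value, run_length) pairs."""
    return sum(length for value, length in runs if predicate(value))

runs = [("MA", 3), ("NY", 2), ("CA", 1)]  # the RLE output from above
print(count_matches_rle(runs, lambda state: state == "NY"))  # 2
```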

However, updates are an issue; there's no free lunch. Updates are hard and create a lot of work. Research is being done on how to make them more efficient, but it's a tough problem to solve.

There is more interesting stuff, but it's hard to relate, and I spent more time listening than typing. I'll write on this more, but it is interesting. Work is being done in this area for SQL Server 10.5 and SQL Server 11. Some of it, the VertiPaq engine, is in SQL Server 10.5, which is SQL Server 2008 R2, I believe. Dr. DeWitt hinted at other things, but I'm not sure what will make it into future products.

Daniel Abadi (Yale) has a great technology blog. David DeWitt recommends him.
