
Is BETWEEN faster than GTE and LTE?


I know I have been light on blog posting this year. They always say that life tends to get in the way of these sorts of tasks, and this year that is certainly true. Anyway, I want this blog to be more than just SQL Server virtualization with VMware vSphere, so I’m intending to branch out – not just with more hypervisors (Microsoft Hyper-V 2012 tips and tricks coming soon!) but also with some more fun tidbits about core SQL Server.

So… a few days ago one of my favorite clients passed on a question from one of his other consultants. The question was – “Why are ‘greater than’ and ‘less than’ faster than ‘between’ for range filters in where clauses?”

What a great question! Let’s experiment and see what conclusions we can draw.

I’ll test with date ranges.

First, let’s create a container to test with. Pick any throw-away database and let’s get started!

--create container for our dummy data
CREATE TABLE [dbo].[BTW](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [DT] [datetime] NULL,
CONSTRAINT [PK_BTW] PRIMARY KEY CLUSTERED
(
    [ID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
) ON [PRIMARY]

Now, we’ll prime the table with some random date values. This script was adapted from one written by Ben Nadel.

--prime table with random date values.
--script adapted from http://www.bennadel.com/blog/310-Ask-Ben-Getting-A-Random-Date-From-A-Date-Range-In-SQL.htm
declare @i int
set @i = 1

-- First, let's declare the date range. I am declaring this
-- here for the demo, but this could be done any way you like.
DECLARE @date_from DATETIME;
DECLARE @date_to DATETIME;

-- Set the start and end dates. In this case, we are using
-- the years 1900 through 1999.
SET @date_from = '1900-01-01';
SET @date_to = '1999-12-31';

while (@i < 100000) begin

    -- Select random dates.
    insert into dbo.btw (DT)
    SELECT
    (
        -- Remember, we want to add a random number to the
        -- start date. In SQL we can add days (as integers)
        -- to a date to increase the actual date/time
        -- object value.
        @date_from +
        (
            -- This will force our random number to be GTE 0.
            ABS(

                -- This will give us a HUGE random number that
                -- might be negative or positive.
                CAST(
                    CAST( NewID() AS BINARY(8) )
                    AS INT
                )
            )

            -- Our random number might be HUGE. We can't
            -- exceed the date range that we are given.
            -- Therefore, we have to take the modulus of the
            -- date range difference. This will give us between
            -- zero and one less than the date range.
            %

            -- To get the number of days in the date range, we
            -- can simply subtract the start date from the
            -- end date. At this point though, we have to cast
            -- to INT as SQL will not make any automatic
            -- conversions for us.
            CAST(
                (@date_to - @date_from)
                AS INT
            )
        )
    )

    set @i = @i + 1

end

OK. Run the block a few times to get a good number of rows in your test table. For the numbers below, I am running these on my home lab with SQL Server 2008 R2 and 736,000 records in this table. I’ll just pluck some random dates out of thin air and off we go.
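By the way, if you want to double-check how many rows you have piled up before moving on, a quick count works (this assumes you have been looping the insert script above against the same dbo.BTW table):

--sanity check: how many rows do we have to play with?
select count(*) as row_count from dbo.BTW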

set statistics io on
set statistics time on

select * from dbo.BTW where DT between '1945-01-08' and '1965-01-01'

select * from dbo.BTW where DT >= '1945-01-08' and DT <= '1965-01-01'

set statistics io off
set statistics time off

We see some interesting results.

(147622 row(s) affected)
Table 'BTW'. Scan count 1, logical reads 1252, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 47 ms,  elapsed time = 1483 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

(147622 row(s) affected)
Table 'BTW'. Scan count 1, logical reads 1252, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 47 ms,  elapsed time = 1964 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

I have identical I/O and CPU output here for each query. I ran this twenty times and got roughly the same results each time. The query execution plans are identical as well.

[Screenshot: execution plans for the two queries]

Hover over the Clustered Index Scan for each query. They are identical. The optimizer even converted the BETWEEN to GTE and LTE for you. Sweet.

[Screenshot: Clustered Index Scan operator details]
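If you would rather not hover over operators in the graphical plan, here is a minimal sketch that shows the same thing in text form. SET SHOWPLAN_TEXT returns the estimated plan instead of executing the statements, and the predicate on the Clustered Index Scan should read as GTE and LTE for both queries.

--optional: compare the plans as text instead of graphically.
--while SHOWPLAN_TEXT is on, statements are not executed; only their estimated plans are returned.
set showplan_text on
go
select * from dbo.BTW where DT between '1945-01-08' and '1965-01-01'
select * from dbo.BTW where DT >= '1945-01-08' and DT <= '1965-01-01'
go
set showplan_text off
go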

These look identical. So… why are the runtimes consistently different? What if we change the order of the queries? What about clearing the plan cache and buffer pool to make sure we have no background ‘stuff’ getting in the way? (Don’t run those commands on production!)

--test queries
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS

set statistics io on
set statistics time on

select * from dbo.BTW where DT >= '1945-01-08' and DT <= '1965-01-01'
select * from dbo.BTW where DT between '1945-01-08' and '1965-01-01'

set statistics io off
set statistics time off

Even stranger. The runtimes stayed with the position in the batch, not with the query syntax. That gives us a little more proof that these really are equivalent.

Maybe we should create an index on the column we’re filtering on? Let’s try it.

--add a nonclustered index on DT column.
create nonclustered index IX_BTW_DT on dbo.BTW (DT) with ( fillfactor = 50 )
--lots of room for inserts, as this is random data.

--rerun queries.
set statistics io on
set statistics time on

select * from dbo.BTW where DT between '1945-01-08' and '1965-01-01'

select * from dbo.BTW where DT >= '1945-01-08' and DT <= '1965-01-01'

set statistics io off
set statistics time off

Did it make a difference between the two queries? Nope. The only thing that changed was the number of logical reads. Run them in forward or reverse order and the elapsed times still came out all over the place.

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

(147622 row(s) affected)
Table 'BTW'. Scan count 1, logical reads 663, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 32 ms,  elapsed time = 800 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

(147622 row(s) affected)
Table 'BTW'. Scan count 1, logical reads 663, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 47 ms,  elapsed time = 1922 ms.

SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

Now, notice something with the above screenshot of the execution plan. We’ve got implicit conversions on the date! Ack! If you ever see those in your query execution plans, figure out what’s going on and fix it. It’s extra overhead and might not always work the way you expect. Jes Borland has a great write-up on how to detect and correct implicit type conversions. It’s well worth a read!
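If you want to hunt for implicit conversions beyond a single query, here is a rough sketch that searches the plan cache for CONVERT_IMPLICIT. This is just one approach, in the same spirit as the techniques in Jes's post; it assumes you have VIEW SERVER STATE permission, and scanning the plan cache like this can be expensive on a busy instance.

--rough sketch: find cached plans that contain an implicit conversion.
--assumes VIEW SERVER STATE permission; searching the whole plan cache can be expensive.
SELECT TOP (20)
    st.text       AS query_text,
    qp.query_plan AS plan_xml
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%'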

I also fixed the implicit conversions with explicit CASTs and tinkered around by putting a delay in between the two queries in an attempt to separate them a bit more.

--implicit type conversions in the queries above? fix them!
set statistics io on
set statistics time on

select * from dbo.BTW where DT between cast('1945-01-08' as datetime) and cast('1965-01-01' as datetime)
waitfor delay '00:00:05'
select * from dbo.BTW where DT >= cast('1945-01-08' as datetime) and DT <= cast('1965-01-01' as datetime)

set statistics io off
set statistics time off

That fixed the differences in runtimes. Now these are spaced out and the queries are executing almost identically.

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

(147622 row(s) affected)
Table 'BTW'. Scan count 1, logical reads 663, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 47 ms,  elapsed time = 1280 ms.

SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 5001 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

(147622 row(s) affected)
Table 'BTW'. Scan count 1, logical reads 663, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 16 ms,  elapsed time = 1127 ms.

SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

Now, are these results conclusive enough that I can objectively state one is faster than the other? Of course not. BETWEEN is just shorthand for GTE and LTE anyway. BETWEEN is translated to GTE and LTE during query compilation, and the execution plan reflects that. The execution times should be the same, give or take background noise on the server.

One thing to keep in mind with BETWEEN is that you can end up with a skewed result set if you are not careful. BETWEEN is inclusive on both ends: greater than or equal to the lower bound AND less than or equal to the upper bound. If you are using date ranges like the examples above, your filter translates to:

select * from dbo.BTW where DT between cast('1945-01-08' as datetime) and cast('1965-01-01' as datetime)

--translates to:
select * from dbo.BTW where DT >= cast('1945-01-08 00:00:00' as datetime) and DT <= cast('1965-01-01 00:00:00' as datetime)
Oops. You might be filtering out a day’s worth of data! Aaron Bertrand has a great write-up on this topic at his blog here. It’s well worth the read if you do any sort of database development at all!
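A common way to avoid that trap, and the kind of pattern write-ups like Aaron’s recommend, is an open-ended (half-open) range: greater than or equal to the start, and strictly less than the day after the end. A quick sketch against our test table:

--sketch: a half-open range includes all of 1965-01-01 without relying on BETWEEN.
--DT can be anything from midnight on 1945-01-08 up to, but not including, midnight on 1965-01-02.
select * from dbo.BTW
where DT >= cast('1945-01-08' as datetime)
  and DT <  cast('1965-01-02' as datetime)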
