daveb-840383 (6/22/2010)
Let me try to explain the situation a little better. If user #1 is connected from 1:00 PM - 6:00 PM
and user #2 is connected from 1:01 PM - 7:59 PM
and user #3 is connected from 3:00 PM - 8:00 PM
and user #4 is connected from 5:15 PM - 5:59 PM
my hourly totals would look like this...
Hour | Total Users Connected
12 | 0
13 | 2
14 | 2
15 | 3
16 | 3
17 | 4
18 | 3
19 | 2
20 | 1
21 | 0
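For what it's worth, those expected totals can be reproduced with a quick sketch. The @Sessions table variable, the arbitrary date, and the inclusive hour comparison here are illustrative only, not from the thread; the point is that both the starting and ending hours count as "connected."

```sql
--===== Illustrative only: the four sessions described above, on an arbitrary date.
DECLARE @Sessions TABLE (UserID INT, Start DATETIME, Stop DATETIME);
INSERT INTO @Sessions VALUES
 (1,'2010-06-22 13:00','2010-06-22 18:00')
,(2,'2010-06-22 13:01','2010-06-22 19:59')
,(3,'2010-06-22 15:00','2010-06-22 20:00')
,(4,'2010-06-22 17:15','2010-06-22 17:59');

--===== Count each session toward every hour it touches, inclusive at both ends.
SELECT h.Hr
      ,TotalUsersConnected = COUNT(s.UserID)
  FROM (VALUES (12),(13),(14),(15),(16),(17),(18),(19),(20),(21)) h (Hr)
  LEFT JOIN @Sessions s
    ON DATEPART(hh, s.Start) <= h.Hr -- session began at or before this hour
   AND DATEPART(hh, s.Stop)  >= h.Hr -- and ended at or after it (inclusive)
 GROUP BY h.Hr
 ORDER BY h.Hr
;
```

Run against those four rows, this returns exactly the hourly table above, including the 4 at hour 17 and the 3 at hour 18 (user #1's 6:00 PM stop still counts toward hour 18).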
I also just realized that the data that I furnished was a little off. The overlap will only ever occur with the start time, not the stop time. Notice record 683 starts on the 31st and ends on the 1st.
--===== Insert the test data into the test table
INSERT INTO #whosonwhen ([ID], [Start], [Stop])
SELECT '6876','2010-06-01 06:13:54.000','2010-06-01 16:00:26.000' UNION ALL
SELECT '6919','2010-06-01 07:56:59.000','2010-06-01 17:57:00.000' UNION ALL
SELECT '6863','2010-06-01 08:37:00.000','2010-06-01 15:30:52.000' UNION ALL
SELECT '6851','2010-06-01 08:18:00.000','2010-06-01 15:00:26.000' UNION ALL
SELECT '6852','2010-06-01 09:02:17.000','2010-06-01 15:00:29.000' UNION ALL
SELECT '683','2010-05-31 23:02:17.000','2010-06-01 02:00:29.000'
So to answer your questions Jeff...
Jeff Moden (6/22/2010)
Are you saying that you want an hourly count by day? And what do you want to show if you have a time span of more than 1 day?
I want an hourly count, but just the current day. The current day will always be the day of the stop time. I want to force the start time to be the current day if the start time is a previous day. Maybe just force it to be one second after midnight.
Thanks!
Apologies for the delay, but you might find it's worth it when the requirements change sometime in the future.
First, let's build a handy little tool to make various aspects of coding a bit easier. The following code builds one of the many renditions of an on-the-fly "Tally" function.
CREATE FUNCTION [dbo].[fnTally]
/**********************************************************************************************************************
Purpose:
Return a column of BIGINTs from @ZeroOrOne up to and including @MaxN with a max value of 1 Trillion.
As a performance note, it takes about 00:02:10 (hh:mm:ss) to generate 1 Billion numbers to a throw-away variable.
Usage:
--===== Syntax example (Returns BIGINT)
SELECT t.N
FROM dbo.fnTally(@ZeroOrOne,@MaxN) t
;
Notes:
1. Based on Itzik Ben-Gan's cascading CTE (cCTE) method for creating a "readless" Tally Table source of BIGINTs.
Refer to the following URLs for how it works and introduction for how it replaces certain loops.
http://www.sqlservercentral.com/articles/T-SQL/62867/
http://sqlmag.com/sql-server/virtual-auxiliary-table-numbers
2. To start a sequence at 0, @ZeroOrOne must be 0 or NULL. Any other value that's convertible to the BIT data-type
will cause the sequence to start at 1.
3. If @ZeroOrOne = 1 and @MaxN = 0, no rows will be returned.
4. If @MaxN is negative or NULL, a "TOP" error will be returned.
5. @MaxN must be a positive number >= the value of @ZeroOrOne, up to and including 1 Trillion. If a larger
number is used, the function will silently truncate after 1 Trillion. If you actually need a sequence with
that many values, you should consider using a different tool. ;-)
6. There will be a substantial reduction in performance if "N" is sorted in descending order. If a descending
sort is required, use code similar to the following. Performance will decrease by about 27% but it's still
very fast, especially compared with just doing a simple descending sort on "N", which is about 20 times slower.
If @ZeroOrOne is 0, remove the "+1" from the code.
DECLARE @MaxN BIGINT;
SELECT @MaxN = 1000;
SELECT DescendingN = @MaxN-N+1
FROM dbo.fnTally(1,@MaxN);
7. There is no performance penalty for sorting "N" in ascending order because the output is explicitly sorted by
ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
Revision History:
Rev 00 - Unknown - Jeff Moden
- Initial creation with error handling for @MaxN.
Rev 01 - 09 Feb 2013 - Jeff Moden
- Modified to start at 0 or 1.
Rev 02 - 16 May 2013 - Jeff Moden
- Removed error handling for @MaxN because of exceptional cases.
Rev 03 - 22 Apr 2015 - Jeff Moden
- Modify to handle 1 Trillion rows for experimental purposes.
**********************************************************************************************************************/
(@ZeroOrOne BIT, @MaxN BIGINT)
RETURNS TABLE WITH SCHEMABINDING AS
RETURN WITH
E1(N) AS (SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1) --10E1 or 10 rows
, E4(N) AS (SELECT 1 FROM E1 a, E1 b, E1 c, E1 d) --10E4 or 10 Thousand rows
,E12(N) AS (SELECT 1 FROM E4 a, E4 b, E4 c) --10E12 or 1 Trillion rows
SELECT N = 0 WHERE ISNULL(@ZeroOrOne,0)= 0 --Conditionally start at 0.
UNION ALL
SELECT TOP(@MaxN) N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E12 -- Values from 1 to @MaxN
;
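A quick sanity check of the function, per Notes 2 and 3 above:

```sql
--===== Starting value of 0 produces N = 0 through @MaxN.
SELECT t.N FROM dbo.fnTally(0, 3) t; -- returns 0, 1, 2, 3

--===== Starting value of 1 produces N = 1 through @MaxN.
SELECT t.N FROM dbo.fnTally(1, 3) t; -- returns 1, 2, 3
```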
Ok... now let's test against a more substantial amount of data. The following code builds a million-row test table according to what I perceive from your posts. You can also use it to test other folks' code for performance. It uses the "fnTally" function as a pseudo-cursor to avoid the slothfulness of a loop or recursive CTE doing the same.
Details are in the code in the form of comments to explain certain assumptions on my part and to explain indexes that either are already on your table or should be.
--===== If it already exists, drop the test table to make reruns easier in SSMS
IF OBJECT_ID('tempdb..#WhosOnWhen','U') IS NOT NULL
DROP TABLE #WhosOnWhen
;
GO
--===== Create a million rows of data where the START date/time is somewhere in 2014 or 2015.
-- The Stop datetime will be up to, but not including, 3 days later than the Start datetime.
-- Don't let the million rows scare you. This takes just several seconds.
WITH
cteStartDates AS
(
SELECT TOP 1000000
Start = ISNULL(RAND(CHECKSUM(NEWID()))*DATEDIFF(dd,'2014','2016')+CAST('2014' AS DATETIME),0)
FROM dbo.fnTally(1,1000000) t
)
SELECT ID = ISNULL(ROW_NUMBER()OVER(ORDER BY Start),0) --ID is realistically in order by Start and is not null
,Start
,Stop = Start + RAND(CHECKSUM(NEWID()))*3 --Stop is a random amount of up to 3 days time after the start
INTO #WhosOnWhen
FROM cteStartDates
;
--===== Add the expected Clustered Index and the expected PK.
-- In this case, they are different. Note how I guaranteed the CI to be unique.
CREATE UNIQUE CLUSTERED INDEX CI_#WhosOnWhen
ON #WhosOnWhen (Start,ID)
;
ALTER TABLE #WhosOnWhen
ADD PRIMARY KEY NONCLUSTERED (ID)
;
Here's one fairly high-performance method for solving your problem for a single day.
--===== Setup variables for the day range.
-- The dates may be changed to include
-- many more days, if required. They
-- will also resolve to only whole days
-- even if a time is included.
DECLARE @pStartDate DATETIME
,@pStopDate DATETIME
;
SELECT @pStartDate = '12 Jul 2015'
,@pStopDate = '12 Jul 2015'
;
--===== This solution uses a high speed on-the-fly expansion of dates.
WITH
cteParms AS
(
SELECT StartDate = DATEDIFF(dd, 0,@pStartDate) --Ensure that we start on a whole date
,StopDate = DATEDIFF(dd,-1,@pStopDate) --Ensure that we end on the next whole date
)
,cteExpandHours AS
( --=== Expand all date ranges by whole hour
SELECT HourPeriodStart = DATEADD(hh,DATEDIFF(hh,0,DATEADD(hh,t.N,d.Start)),0)
FROM #WhosOnWhen d
CROSS APPLY dbo.fnTally(0,DATEDIFF(hh,d.Start,d.Stop)) t
CROSS JOIN cteParms p
WHERE d.Stop >= p.StartDate --These relations correctly pick up any overlaps
AND d.Start < p.StopDate
)
SELECT HourPeriodStart
,HourPeriod = DATEPART(hh,HourPeriodStart)
,CountPerHour = COUNT(*)
FROM cteExpandHours
CROSS JOIN cteParms p
WHERE HourPeriodStart >= p.StartDate --Criteria repeated to limit output of report
AND HourPeriodStart < p.StopDate
GROUP BY HourPeriodStart
ORDER BY HourPeriodStart
;
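The solution above handles a Start on a previous day through its WHERE clause rather than by modifying the data. If you literally need to force a prior-day Start to the first instant of the Stop day, as mentioned earlier in the thread, here's a sketch of that normalization (a CASE expression of my own devising, not something the query above requires):

```sql
--===== Sketch: clamp any Start that falls on a previous day to midnight of the Stop day.
SELECT d.ID
      ,Start = CASE
                  WHEN d.Start < DATEADD(dd, DATEDIFF(dd, 0, d.Stop), 0)
                  THEN DATEADD(dd, DATEDIFF(dd, 0, d.Stop), 0) --Midnight of the Stop day
                  ELSE d.Start
               END
      ,d.Stop
  FROM #WhosOnWhen d
;
```

The DATEADD/DATEDIFF pair is the usual pre-2008 idiom for stripping the time portion from a DATETIME.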
And here are the results from one of the runs. Keep in mind that the test data is randomly constructed and your counts will vary because of that.
HourPeriodStart HourPeriod CountPerHour
----------------------- ----------- ------------
2015-07-12 00:00:00.000 0 2021
2015-07-12 01:00:00.000 1 2025
2015-07-12 02:00:00.000 2 2028
2015-07-12 03:00:00.000 3 2040
2015-07-12 04:00:00.000 4 2036
2015-07-12 05:00:00.000 5 2027
2015-07-12 06:00:00.000 6 2023
2015-07-12 07:00:00.000 7 2029
2015-07-12 08:00:00.000 8 2048
2015-07-12 09:00:00.000 9 2052
2015-07-12 10:00:00.000 10 2073
2015-07-12 11:00:00.000 11 2067
2015-07-12 12:00:00.000 12 2078
2015-07-12 13:00:00.000 13 2088
2015-07-12 14:00:00.000 14 2083
2015-07-12 15:00:00.000 15 2084
2015-07-12 16:00:00.000 16 2074
2015-07-12 17:00:00.000 17 2058
2015-07-12 18:00:00.000 18 2058
2015-07-12 19:00:00.000 19 2063
2015-07-12 20:00:00.000 20 2069
2015-07-12 21:00:00.000 21 2070
2015-07-12 22:00:00.000 22 2063
2015-07-12 23:00:00.000 23 2069
(24 row(s) affected)
And, yes... I absolutely realize that the OP may no longer be paying attention to a 5 year old post but someone finding this code might also find it useful. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.