July 12, 2016 at 10:03 pm
I have a scenario in which I get the start date as a parameter and, based on that start date, I need to calculate the end date for each record using its duration.
For the second record onward, the start date has to be the previous record's end date.
Sample data below:
Declare @startDate date = '01/01/2016'
DECLARE @Table as table
(Pid bigint,
Cid bigint,
SortOrder int,
startdate date,
duration int,
enddate date)
INSERT INTO @Table
VALUES
(1,11,1,@startDate,1,Null),
(1,21,2,NULL,3,Null),
(1,31,3,Null,7,Null),
(1,14,4,NULL,11,Null)
--Get Data
SELECT * FROM @TABLE
Result Set:
Pid  Cid  Order  startdate  Duration  enddate
---  ---  -----  ---------  --------  ---------
1    11   1      1/1/2016   1         1/2/2016
1    21   2      1/2/2016   3         1/5/2016
1    31   3      1/5/2016   7         1/12/2016
1    14   4      1/15/2016  11        1/26/2016
Any suggestions would be great.
Thanks.
July 13, 2016 at 12:26 am
Here is a quick suggestion for a SQL Server 2008 compatible solution; it is fine on smaller sets but may need to be improved for large sets. If you are on SQL Server 2012 or later then let us know and we'll produce a more efficient solution using window functions.
😎
USE TEEST;
GO
SET NOCOUNT ON;
Declare @startDate date = '01/01/2016'
DECLARE @Table as table
(Pid bigint,
Cid bigint,
SortOrder int,
startdate date,
duration int,
enddate date)
INSERT INTO @Table
VALUES
(1,11,1,@startDate,1,Null),
(1,21,2,NULL,3,Null),
(1,31,3,Null,7,Null),
(1,14,4,NULL,11,Null)
--Get Data
SELECT
     SD.Pid
    ,SD.Cid
    ,SD.SortOrder
    ,DATEADD(DAY, XD.XDURATION, MAX(SD.startdate) OVER
        (
            PARTITION BY SD.Pid
        )
     ) AS startdate
    ,DATEADD(DAY, SD.duration, DATEADD(DAY, XD.XDURATION, MAX(SD.startdate) OVER
        (
            PARTITION BY SD.Pid
        )
     )) AS enddate
    ,SD.duration
FROM @Table SD
CROSS APPLY
    (
        SELECT
            ISNULL(SUM(XSD.duration), 0) AS XDURATION
        FROM @Table XSD
        WHERE SD.Pid = XSD.Pid
          AND SD.SortOrder > XSD.SortOrder
    ) AS XD(XDURATION);
Output
Pid Cid SortOrder startdate enddate duration
----- ---- ----------- ---------- ---------- -----------
1 11 1 2016-01-01 2016-01-02 1
1 21 2 2016-01-02 2016-01-05 3
1 31 3 2016-01-05 2016-01-12 7
1 14 4 2016-01-12 2016-01-23 11
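For anyone on SQL Server 2012 or later, here is a minimal sketch of the window-function approach hinted at above. It assumes the same @Table variable and @startDate from the test harness; a running SUM of the preceding durations replaces the CROSS APPLY, so the table is only read once:
SELECT
     SD.Pid
    ,SD.Cid
    ,SD.SortOrder
    -- Start date = parameter date plus the sum of all earlier durations in the group
    ,DATEADD(DAY
        ,ISNULL(SUM(SD.duration) OVER
            (
                PARTITION BY SD.Pid
                ORDER BY SD.SortOrder
                ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
            ), 0)
        ,MAX(SD.startdate) OVER (PARTITION BY SD.Pid)
     ) AS startdate
    -- End date = parameter date plus the running total including the current duration
    ,DATEADD(DAY
        ,SUM(SD.duration) OVER
            (
                PARTITION BY SD.Pid
                ORDER BY SD.SortOrder
                ROWS UNBOUNDED PRECEDING
            )
        ,MAX(SD.startdate) OVER (PARTITION BY SD.Pid)
     ) AS enddate
    ,SD.duration
FROM @Table SD;
The ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING frame produces the same offset the CROSS APPLY calculates, but in a single pass over the table.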
July 13, 2016 at 7:01 pm
See this article (http://www.sqlservercentral.com/articles/T-SQL/61539/) for why Eirikur says "fine on smaller sets". And it doesn't take many rows for the problem to become HUGE.
I don't, however, see this problem (Triangular Join) materializing the expected Cartesian Product for internal row counts in Eirikur's code but we won't know for sure until more data is added to the test harness.
Ah... didn't look carefully enough. The Triangular Join problem does exist here. The table is scanned once for each row in the source, and the inequality join internally generates the sum of the numbers from 1 to N-1 rows (N is the number of source rows), which is 1+2+3 or 6 rows for this 4-row sample. In general the value is ((N-1)^2+(N-1))/2, or (9+3)/2 = 6 in this case. For just 100 rows that would be 4,950 internally generated rows. For just 10,000 rows, it would be 49,995,000 (almost 50 million) rows. It adds up really quickly and slows down even more quickly.
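To put numbers on that, here is a quick illustrative query (just a sketch, not part of the test harness) that evaluates the formula for a few row counts:
-- Internal rows generated by the inequality (triangular) join for N source rows:
-- ((N-1)^2 + (N-1)) / 2, which simplifies to N * (N - 1) / 2
SELECT v.N
      ,(v.N * (v.N - 1)) / 2 AS internal_rows
FROM (VALUES (4), (100), (10000)) AS v(N);
-- Returns 6, 4950, and 49995000 respectively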
--Jeff Moden
Change is inevitable... Change for the better is not.
July 13, 2016 at 7:34 pm
Thank you. Works like a charm