January 11, 2015 at 11:07 am
I have the following:

id   due_date     update_date   month
200  4/14/2019    2/4/2014      2
200  6/16/2020    4/6/2014      4
200  8/18/2021    6/8/2014      6
300  6/13/2014    3/3/2014      3
300  9/19/2014    6/9/2014      6
300  12/22/2014   9/12/2014     9

How do you update the due date for each month?
id   due_date     update_date   month
200  4/14/2019    2/4/2014      2
200  4/14/2019                  3
200  6/16/2020    4/6/2014      4
200  6/16/2020                  5
200  8/18/2021    6/8/2014      6
200  8/18/2021                  7
200  8/18/2021                  8
200  8/18/2021                  9
200  8/18/2021                  10
200  8/18/2021                  11
200  8/18/2021                  12
300  6/13/2014    3/3/2014      3
300  6/13/2014                  4
300  6/13/2014                  5
300  9/19/2014    6/9/2014      6
300  9/19/2014                  7
300  9/19/2014                  8
300  12/22/2014   9/12/2014     9
300  12/22/2014                 10
300  12/22/2014                 11
300  12/22/2014                 12
January 11, 2015 at 4:13 pm
marybeth1975 (1/11/2015)
I have the following:

id   due_date     update_date   month
200  4/14/2019    2/4/2014      2
200  6/16/2020    4/6/2014      4
200  8/18/2021    6/8/2014      6
300  6/13/2014    3/3/2014      3
300  9/19/2014    6/9/2014      6
300  12/22/2014   9/12/2014     9

How do you update the due date for each month?

id   due_date     update_date   month
200  4/14/2019    2/4/2014      2
200  4/14/2019                  3
200  6/16/2020    4/6/2014      4
200  6/16/2020                  5
200  8/18/2021    6/8/2014      6
200  8/18/2021                  7
200  8/18/2021                  8
200  8/18/2021                  9
200  8/18/2021                  10
200  8/18/2021                  11
200  8/18/2021                  12
300  6/13/2014    3/3/2014      3
300  6/13/2014                  4
300  6/13/2014                  5
300  9/19/2014    6/9/2014      6
300  9/19/2014                  7
300  9/19/2014                  8
300  12/22/2014   9/12/2014     9
300  12/22/2014                 10
300  12/22/2014                 11
300  12/22/2014                 12
Hi Marybeth,
It would be really helpful on future posts if you'd post your data example in a readily consumable format, because most of us like to test our code before posting it. Please see the article at the first link under "Helpful Links" in my signature line below for more information on how to do that.
Here's another way to post readily consumable data...
--===== Create and populate the test table on-the-fly.
SELECT id
,due_date = CAST(due_date AS DATETIME)
,update_date = CAST(update_date AS DATETIME)
,month
INTO #TestTable
FROM (
SELECT 200,'4/14/2019' ,'2/4/2014' ,2 UNION ALL
SELECT 200,'6/16/2020' ,'4/6/2014' ,4 UNION ALL
SELECT 200,'8/18/2021' ,'6/8/2014' ,6 UNION ALL
SELECT 300,'6/13/2014' ,'3/3/2014' ,3 UNION ALL
SELECT 300,'9/19/2014' ,'6/9/2014' ,6 UNION ALL
SELECT 300,'12/22/2014','9/12/2014',9
) d (id,due_date,update_date,month)
;
Using that test data, here's one possible solution, but don't try to run it just yet.
WITH
cteEnumerate AS
(
SELECT RowNum = ROW_NUMBER() OVER (PARTITION BY id ORDER BY due_date)
,id
,due_date
,update_date
,month
FROM #TestTable
)
SELECT lo.id
,due_date = CONVERT(CHAR(10),lo.due_date,101)
,update_date = CASE WHEN t.N = 0 THEN CONVERT(CHAR(10),lo.update_date,101) ELSE '' END
,month = lo.month + t.N
FROM cteEnumerate lo
LEFT JOIN cteEnumerate hi
ON lo.id = hi.id
AND lo.RowNum+1 = hi.RowNum
CROSS APPLY dbo.fnTally(0, ISNULL(hi.month,13) - lo.month - 1) t
;
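To see where the "13" comes from, here's a small trace of my own (an illustration, not part of the solution itself): on the last row for a given id there is no "next" row, so hi.month is NULL and the 13 acts as a "month 13" sentinel that makes the expansion run through December.

--===== Illustration only. For the id = 300 row with month = 9 (its last row),
     -- hi.month is NULL, so fnTally(0, ISNULL(NULL,13)-9-1) = fnTally(0,3)
     -- returns N = 0 through 3, and lo.month + N covers months 9 through 12.
SELECT month = 9 + t.N
  FROM dbo.fnTally(0, 13-9-1) t
;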
That solution requires the use of the "Swiss Army Knife" of SQL Server, a Tally Table or a Tally Function. Here's how to build the fnTally function...
CREATE FUNCTION [dbo].[fnTally]
/**********************************************************************************************************************
Purpose:
Return a column of BIGINTs from @ZeroOrOne up to and including @MaxN with a max value of 1 Billion.
As a performance note, it takes about 00:02:10 (hh:mm:ss) to generate 1 Billion numbers to a throw-away variable.
Usage:
--===== Syntax example (Returns BIGINT)
SELECT t.N
FROM dbo.fnTally(@ZeroOrOne,@MaxN) t
;
Notes:
1. Based on Itzik Ben-Gan's cascading CTE (cCTE) method for creating a "readless" Tally Table source of BIGINTs.
Refer to the following URL for how it works. http://sqlmag.com/sql-server/virtual-auxiliary-table-numbers
2. To start a sequence at 0, @ZeroOrOne must be 0 or NULL. Any other value that's convertible to the BIT data-type
will cause the sequence to start at 1.
3. If @ZeroOrOne = 1 and @MaxN = 0, no rows will be returned.
4. If @MaxN is negative or NULL, a "TOP" error will be returned.
5. @MaxN must be a positive number from >= the value of @ZeroOrOne up to and including 1 Billion. If a larger
number is used, the function will silently truncate after 1 Billion. If you actually need a sequence with
that many values, you should consider using a different tool. ;-)
6. There will be a substantial reduction in performance if "N" is sorted in descending order. If a descending
sort is required, use code similar to the following. Performance will decrease by about 27% but it's still
very fast, especially compared with just doing a simple descending sort on "N", which is about 20 times slower.
In this case, if @ZeroOrOne is 0, remove the "+1" from the code.
DECLARE @MaxN BIGINT;
SELECT @MaxN = 1000;
SELECT DescendingN = @MaxN-N+1
FROM dbo.fnTally(1,@MaxN);
7. There is no performance penalty for sorting "N" in ascending order because the output is explicitly sorted by
ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
Revision History:
Rev 00 - Unknown - Jeff Moden
- Initial creation with error handling for @MaxN.
Rev 01 - 09 Feb 2013 - Jeff Moden
- Modified to start at 0 or 1.
Rev 02 - 16 May 2013 - Jeff Moden
- Removed error handling for @MaxN because of exceptional cases.
**********************************************************************************************************************/
(@ZeroOrOne BIT, @MaxN INT)
RETURNS TABLE WITH SCHEMABINDING AS
RETURN WITH
E1(N) AS (SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1) --10E1 or 10 rows
, E3(N) AS (SELECT 1 FROM E1 a, E1 b, E1 c) --10E3 or 1 Thousand rows
, E9(N) AS (SELECT 1 FROM E3 a, E3 b, E3 c) --10E9 or 1 Billion rows
SELECT N = 0 WHERE ISNULL(@ZeroOrOne,0)= 0 --Conditionally start at 0.
UNION ALL
SELECT TOP(@MaxN) N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E9 -- Values from 1 to @MaxN
;
... and, if you've never heard of such a thing, here's an introductory article on what Tally Tables are and how they can be used to replace certain loops.
http://www.sqlservercentral.com/articles/T-SQL/62867/
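If you've never used one before, here's a tiny example of my own (just a sketch, using only the fnTally function above) showing the loop-replacement idea: producing the first day of each month of 2014 as a set, instead of incrementing a variable in a WHILE loop.

--===== Illustration only: one row per month of 2014, no loop required.
     -- fnTally(0, 11) returns N = 0 through 11, one "month offset" per row.
SELECT MonthStart = DATEADD(mm, t.N, '20140101')
  FROM dbo.fnTally(0, 11) t
;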
The output I get is identical to your request.
id due_date update_date month
----------- ---------- ----------- --------------------
200 04/14/2019 02/04/2014 2
200 04/14/2019 3
200 06/16/2020 04/06/2014 4
200 06/16/2020 5
200 08/18/2021 06/08/2014 6
200 08/18/2021 7
200 08/18/2021 8
200 08/18/2021 9
200 08/18/2021 10
200 08/18/2021 11
200 08/18/2021 12
300 06/13/2014 03/03/2014 3
300 06/13/2014 4
300 06/13/2014 5
300 09/19/2014 06/09/2014 6
300 09/19/2014 7
300 09/19/2014 8
300 12/22/2014 09/12/2014 9
300 12/22/2014 10
300 12/22/2014 11
300 12/22/2014 12
(21 row(s) affected)
Your turn, please. I can't figure out for the life of me why anyone would need this particular output. The resulting "month" column doesn't make sense to me because you have multiple entries per month and they're all on the same day. Please explain.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 11, 2015 at 4:30 pm
Hi, I'm just trying to create a dataset that I can query against to perform calculations per month. Thank you!
January 11, 2015 at 9:41 pm
marybeth1975 (1/11/2015)
Hi, I'm just trying to create a dataset that I can query against to perform calculations per month. Thank you!
That's exactly why I don't understand the desired output you provided.
Anyway, are you all set now? Was that meant to be the desired output or, based on what you just said, was that not the desired output and you need something else?
--Jeff Moden
Change is inevitable... Change for the better is not.
Viewing 4 posts - 1 through 4 (of 4 total)