SQLServerCentral is supported by Red Gate Software Ltd.
a bit of a puzzle, can you fine people assist me?
Posted Tuesday, July 02, 2013 9:05 AM
Forum Newbie

I have two tables as defined below, and I need to return a list of jobs from TABLE1 based on these conditions:
- every file associated with a job must have a success indicator in its most recent entry in the history table
- if any associated file is missing from the history table, or its most recent entry is a failure, that job must be excluded.

Excluding all the failed files is easy, but how do I exclude a job when only 2 of its 3 files have succeeded?
The data below shows that the most recent entries for files 1, 2 and 3 are not all successful, so job1 should be excluded from the result set. I could of course loop through this and be done with it, but I want to derive a result set instead of using a loop construct. Any thoughts?

CREATE TABLE TABLE1 (filename varchar(15), jobname varchar(15))

CREATE TABLE TABLE2 (Filename varchar(15), SuccessFailIndicator bit, checkdate datetime)

insert into table1
values('file1.txt', 'job1')

insert into table1
values('file2.txt', 'job1')

insert into table1
values('file3.txt', 'job1')

insert into table2
values('file1.txt', 1, '6/5/2013')

insert into table2
values('file2.txt', 0, '6/4/2013')

insert into table2
values('file3.txt', 1, '6/1/2013')

insert into table2
values('file1.txt', 0, '6/1/2013')

insert into table2
values('file2.txt', 0, '6/1/2013')

The design here is just to exemplify the logic; these are not the real data types or tables in use, of course.
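For reference, here is a quick way to see the most recent history entry per file in the sample data (a sketch assuming SQL Server 2005 or later for ROW_NUMBER; not part of the original post):

```sql
-- Latest history row per file. With the sample data this returns
-- file1.txt = 1 (6/5), file2.txt = 0 (6/4), file3.txt = 1 (6/1),
-- so job1 should indeed be excluded: file2's latest entry is a failure.
SELECT [Filename], SuccessFailIndicator, checkdate
FROM (
    SELECT [Filename], SuccessFailIndicator, checkdate,
           ROW_NUMBER() OVER (PARTITION BY [Filename] ORDER BY checkdate DESC) AS rn
    FROM TABLE2
) x
WHERE rn = 1;
```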
Post #1469588
Posted Tuesday, July 02, 2013 9:16 AM


SSCommitted


Try this

WITH CTE1 AS (
SELECT Filename,SuccessFailIndicator,checkdate,
ROW_NUMBER() OVER(PARTITION BY Filename ORDER BY checkdate DESC) AS rn
FROM TABLE2)
SELECT t1.jobname
FROM TABLE1 t1
LEFT OUTER JOIN CTE1 c ON c.Filename = t1.Filename
AND c.SuccessFailIndicator = 1
AND c.rn=1
GROUP BY t1.jobname
HAVING COUNT(t1.Filename) = COUNT(c.Filename);



____________________________________________________

How to get the best help on a forum

http://www.sqlservercentral.com/articles/Best+Practices/61537

Never approach a goat from the front, a horse from the rear, or a fool from any direction.
Post #1469598
Posted Tuesday, July 02, 2013 9:37 AM
Forum Newbie

This is exactly what I had, except for the bit of genius at the end where you compared the counts. Thank you very much, kind internet stranger!
Post #1469624
Posted Thursday, July 04, 2013 12:17 AM


Hall of Fame

Here's possibly another way:

SELECT jobname
FROM TABLE1 a
CROSS APPLY (
SELECT TOP 1 SuccessFailIndicator=1-SuccessFailIndicator
FROM TABLE2 b
WHERE a.[Filename] = b.[Filename]
ORDER BY checkdate DESC
) b
GROUP BY a.jobname
HAVING SUM(b.SuccessFailIndicator) = 0;


Depending on your indexing, one could do better than the other.
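To make the indexing point concrete: a covering index along these lines (hypothetical name, columns taken from the sample schema; a sketch, not benchmarked here) would let either query resolve the "latest row per file" with a seek rather than a scan:

```sql
-- Hypothetical supporting index. The DESC key on checkdate matches the
-- ORDER BY checkdate DESC used by both the rn = 1 filter and the TOP 1,
-- and INCLUDE makes the index covering so no lookup is needed.
CREATE INDEX IX_TABLE2_Filename_Checkdate
    ON TABLE2 ([Filename], checkdate DESC)
    INCLUDE (SuccessFailIndicator);
```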



My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

My thought question: Have you ever been told that your query runs too fast?

My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.


Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
Splitting strings based on patterns can be fast!
Post #1470314
Posted Thursday, July 04, 2013 12:50 AM


Hall of Fame

I'm always interested in cases like this, so with more time on my hands than I care to admit, I came up with the following test harness:

CREATE TABLE #TABLE1 ([filename] varchar(15), jobname varchar(15)
,PRIMARY KEY (jobname, [filename]));
CREATE TABLE #TABLE2 ([filename] varchar(15), SuccessFailIndicator bit, checkdate datetime
,PRIMARY KEY ([filename], checkdate));
GO
insert into #table1 values('file1.txt', 'job1');
insert into #table1 values('file2.txt', 'job1');
insert into #table1 values('file3.txt', 'job1');

WITH Tally (n) AS (
SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns a CROSS JOIN sys.all_columns b)
INSERT INTO #TABLE1
SELECT 'f' + CAST(b.n AS VARCHAR(2)) + RIGHT('00000000' + CAST(a.n AS VARCHAR(9)), 9) + '.txt'
,'j' + RIGHT('00000000' + CAST(a.n AS VARCHAR(9)), 9)
FROM Tally a
CROSS APPLY (SELECT TOP (1+ABS(CHECKSUM(NEWID()))%9) n FROM Tally) b;

insert into #TABLE2 values('file1.txt', 1, '6/5/2013');
insert into #TABLE2 values('file2.txt', 0, '6/4/2013');
insert into #TABLE2 values('file3.txt', 1, '6/1/2013');
insert into #TABLE2 values('file1.txt', 0, '6/1/2013');
insert into #TABLE2 values('file2.txt', 0, '6/1/2013');

WITH Tally (n) AS (
SELECT TOP 10 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns a)
INSERT INTO #TABLE2
SELECT [filename], ABS(CHECKSUM(NEWID()))%2
,DATEADD(day, -(n-1), '6/5/2013')
FROM #TABLE1 a
CROSS APPLY (SELECT TOP (1+ABS(CHECKSUM(NEWID()))%8) n FROM Tally) b
WHERE [filename] NOT IN ('file1.txt','file2.txt','file3.txt');

--SELECT * FROM #TABLE1;
--SELECT * FROM #TABLE2;

DECLARE @jobname VARCHAR(15);

DBCC FREEPROCCACHE;
PRINT 'MARK';
SET STATISTICS TIME ON;
WITH CTE1 AS (
SELECT Filename,SuccessFailIndicator,checkdate,
ROW_NUMBER() OVER(PARTITION BY Filename ORDER BY checkdate DESC) AS rn
FROM #TABLE2)
SELECT @jobname=t1.jobname
FROM #TABLE1 t1
LEFT OUTER JOIN CTE1 c ON c.Filename = t1.Filename
AND c.SuccessFailIndicator = 1
AND c.rn=1
GROUP BY t1.jobname
HAVING COUNT(t1.Filename) = COUNT(c.Filename)
ORDER BY t1.jobname;
SET STATISTICS TIME OFF;

DBCC FREEPROCCACHE;
PRINT 'DWAIN';
SET STATISTICS TIME ON;
SELECT @jobname=jobname
FROM #TABLE1 a
CROSS APPLY (
SELECT TOP 1 SuccessFailIndicator=1-SuccessFailIndicator
FROM #TABLE2 b
WHERE a.[Filename] = b.[Filename]
ORDER BY checkdate DESC
) b
GROUP BY a.jobname
HAVING SUM(b.SuccessFailIndicator) = 0
ORDER BY a.jobname;
SET STATISTICS TIME OFF;

GO
DROP TABLE #TABLE1;
DROP TABLE #TABLE2;


Which yielded these results:

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

MARK
SQL Server Execution Times:
CPU time = 4352 ms, elapsed time = 2032 ms.
Warning: Null value is eliminated by an aggregate or other SET operation.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DWAIN
SQL Server Execution Times:
CPU time = 1763 ms, elapsed time = 450 ms.



Note that parallelism played a role in both queries because CPU > Elapsed time.
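As an aside, the "Null value is eliminated by an aggregate" warning from the first query is harmless here; COUNT(c.Filename) skipping NULLs is exactly the intended test. If the warning needs silencing, the count can be rewritten so no nullable column passes through an aggregate (a sketch against the sample tables, not re-benchmarked):

```sql
WITH CTE1 AS (
    SELECT Filename, SuccessFailIndicator,
           ROW_NUMBER() OVER (PARTITION BY Filename ORDER BY checkdate DESC) AS rn
    FROM TABLE2
)
SELECT t1.jobname
FROM TABLE1 t1
LEFT OUTER JOIN CTE1 c ON c.Filename = t1.Filename
    AND c.SuccessFailIndicator = 1
    AND c.rn = 1
GROUP BY t1.jobname
-- Same match-count test, but the CASE never aggregates a NULL directly:
HAVING COUNT(t1.Filename) = SUM(CASE WHEN c.Filename IS NULL THEN 0 ELSE 1 END);
```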



Post #1470322
Posted Thursday, July 04, 2013 2:19 AM


SSCommitted

dwain.c (7/4/2013)
[snip]
Interesting! However, the two queries don't give the same results. If you run this

delete from TABLE2 where Filename<>'file3.txt'

on the OP's sample data, so that job1 has only one out of three successfully completed files, your query still reports the job as successful.




Post #1470338
Posted Thursday, July 04, 2013 6:35 AM


Hall of Fame

Mark-101232 (7/4/2013)

Interesting! However, the two queries don't give the same results. If you run this

delete from TABLE2 where Filename<>'file3.txt'

on the OPs sample data so that job1 only has one out of three successfully completed files, your query still reports the job as successful




Degenerate case, huh? Sorry I missed that in the requirements. A minor fix, though, should get it:

SELECT jobname
FROM #TABLE1 a
OUTER APPLY (
SELECT TOP 1 SuccessFailIndicator=1-SuccessFailIndicator
FROM #TABLE2 b
WHERE a.[Filename] = b.[Filename]
ORDER BY checkdate DESC
) b
GROUP BY a.jobname
HAVING SUM(ISNULL(b.SuccessFailIndicator, 1)) = 0;


Test harness is now:

CREATE TABLE #TABLE1 ([filename] varchar(15), jobname varchar(15)
,PRIMARY KEY (jobname, [filename]));
CREATE TABLE #TABLE2 ([filename] varchar(15), SuccessFailIndicator bit, checkdate datetime
,PRIMARY KEY ([filename], checkdate));
GO
insert into #table1 values('file1.txt', 'job1');
insert into #table1 values('file2.txt', 'job1');
insert into #table1 values('file3.txt', 'job1');

WITH Tally (n) AS (
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns a CROSS JOIN sys.all_columns b)
INSERT INTO #TABLE1
SELECT 'f' + CAST(b.n AS VARCHAR(2)) + RIGHT('00000000' + CAST(a.n AS VARCHAR(9)), 9) + '.txt'
,'j' + RIGHT('00000000' + CAST(a.n AS VARCHAR(9)), 9)
FROM Tally a
CROSS APPLY (SELECT TOP (1+ABS(CHECKSUM(NEWID()))%9) n FROM Tally) b;

insert into #TABLE2 values('file1.txt', 1, '6/5/2013');
insert into #TABLE2 values('file2.txt', 0, '6/4/2013');
insert into #TABLE2 values('file3.txt', 1, '6/1/2013');
insert into #TABLE2 values('file1.txt', 0, '6/1/2013');
insert into #TABLE2 values('file2.txt', 0, '6/1/2013');

WITH Tally (n) AS (
SELECT TOP 10 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns a)
INSERT INTO #TABLE2
SELECT [filename], ABS(CHECKSUM(NEWID()))%2
,DATEADD(day, -(n-1), '6/5/2013')
FROM #TABLE1 a
CROSS APPLY (SELECT TOP (1+ABS(CHECKSUM(NEWID()))%8) n FROM Tally) b
WHERE [filename] NOT IN ('file1.txt','file2.txt','file3.txt');

--SELECT * FROM #TABLE1;
--SELECT * FROM #TABLE2;

DECLARE @jobname VARCHAR(15);

DBCC FREEPROCCACHE;
PRINT 'MARK';
SET STATISTICS TIME ON;
WITH CTE1 AS (
SELECT Filename,SuccessFailIndicator,checkdate,
ROW_NUMBER() OVER(PARTITION BY Filename ORDER BY checkdate DESC) AS rn
FROM #TABLE2)
SELECT @jobname=t1.jobname
--SELECT t1.jobname
FROM #TABLE1 t1
LEFT OUTER JOIN CTE1 c ON c.Filename = t1.Filename
AND c.SuccessFailIndicator = 1
AND c.rn=1
GROUP BY t1.jobname
HAVING COUNT(t1.Filename) = COUNT(c.Filename);
SET STATISTICS TIME OFF;

DBCC FREEPROCCACHE;
PRINT 'DWAIN';
SET STATISTICS TIME ON;
--SELECT jobname
SELECT @jobname=jobname
FROM #TABLE1 a
OUTER APPLY (
SELECT TOP 1 SuccessFailIndicator=1-SuccessFailIndicator
FROM #TABLE2 b
WHERE a.[Filename] = b.[Filename]
ORDER BY checkdate DESC
) b
GROUP BY a.jobname
HAVING SUM(ISNULL(b.SuccessFailIndicator, 1)) = 0;
SET STATISTICS TIME OFF;

GO
DROP TABLE #TABLE1;
DROP TABLE #TABLE2;


The results aren't much different from before.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

MARK
SQL Server Execution Times:
CPU time = 13993 ms, elapsed time = 6558 ms.
Warning: Null value is eliminated by an aggregate or other SET operation.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DWAIN
SQL Server Execution Times:
CPU time = 7831 ms, elapsed time = 2105 ms.



The reason (at least according to the execution plan on my machine) that the SELECT TOP 1 within the APPLY is faster is that it can do a clustered index seek on TABLE2 (and a scan on TABLE1), while the JOIN version does two clustered index scans.

Edit: Corrected (removed) an ORDER BY I had left in to check the results, and bumped up the size of the test harness a bit.



Post #1470426