
A genuine use for a SQL CLR Aggregate
Posted Monday, January 3, 2011 9:12 AM
Old Hand


Group: General Forum Members
Last Login: Wednesday, July 30, 2014 7:38 AM
Points: 377, Visits: 432
CELKO (12/29/2010)
rdw-769613 (12/29/2010)
Another good example of a use for the CLR is GMT/local/Unix time handling. Trying this in T-SQL is nearly impossible, while in the CLR it's simple and faster.


I always did this by mimicking the ANSI/ISO Standards system tables for the WITH TIMEZONE option in the DDL. Then everything is done with VIEWs. No need for a CLR.

Joe, do you have an article on this that you could link, or is this in one of your books? I've had my own issues with this over time and would like to see your implementation.
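
In the meantime, here is a minimal sketch of how I picture the VIEW-based approach working; the table, view, and column names (TimezoneOffsets, OrdersLocalTime, dbo.Orders) are my own hypothetical stand-ins rather than Joe's actual schema, and a real version would need effective-date ranges to handle DST:

-- Hypothetical lookup table standing in for the ANSI/ISO timezone system tables
CREATE TABLE dbo.TimezoneOffsets
(
    TimezoneName     VARCHAR(50) NOT NULL PRIMARY KEY,
    UtcOffsetMinutes INT         NOT NULL
);

INSERT INTO dbo.TimezoneOffsets (TimezoneName, UtcOffsetMinutes)
SELECT 'UTC', 0 UNION ALL
SELECT 'US Eastern', -300 UNION ALL
SELECT 'US Pacific', -480;
GO

-- The VIEW does the conversion, so no CLR is needed at query time;
-- dbo.Orders (OrderId, OrderDateUtc, TimezoneName) is assumed here.
CREATE VIEW dbo.OrdersLocalTime
AS
SELECT o.OrderId,
       o.OrderDateUtc,
       DATEADD(MINUTE, tz.UtcOffsetMinutes, o.OrderDateUtc) AS OrderDateLocal
FROM dbo.Orders AS o
JOIN dbo.TimezoneOffsets AS tz
  ON tz.TimezoneName = o.TimezoneName;
GO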


Beer's Law: Absolutum obsoletum
"if it works it's out-of-date"
Post #1041896
Posted Tuesday, January 4, 2011 4:58 AM


SSCommitted


Group: General Forum Members
Last Login: Tuesday, September 9, 2014 1:49 AM
Points: 1,694, Visits: 19,551
Solomon Rutzky (12/29/2010)
Adam Machanic (12/29/2010)
Have you compared the performance of doing a median calculation w/compression against performing the same calculation using Joe Celko's ROW_NUMBER technique?

http://sqlblog.com/blogs/adam_machanic/archive/2006/12/18/medians-row-numbers-and-performance.aspx

Compression is far from cheap, and I suspect that the ROW_NUMBER solution will scale much better. But that's just a suspicion and I don't have time to test myself at the moment.


Hi Adam. Thanks for mentioning that article as I had not seen it. I just tested this against the AdventureWorks2008 DB on SQL Server 2008 SP2, using the Celko method, the Ben-Gan method, and my CLR UDA. The CLR method has the same number of reads as the Celko method but slightly higher CPU and elapsed times than both the Celko and Ben-Gan methods.

So yes, the pure T-SQL methods do appear to be slightly more efficient. However, for the record, I did not say that compression was cheap. In fact, I said that it would be slightly less efficient but that it might prove to be a worthwhile trade-off depending on the circumstances.

My testing is as follows (the first two queries are copied directly from the blog post you noted above):

------------------------------------------------------------------------
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SET STATISTICS IO ON
SET STATISTICS TIME ON

-- Celko method: a row is a median candidate when its ascending and
-- descending row numbers are equal or adjacent
SELECT
    CustomerId,
    AVG(TotalDue)
FROM
(
    SELECT
        CustomerId,
        TotalDue,
        ROW_NUMBER() OVER (
            PARTITION BY CustomerId
            ORDER BY TotalDue ASC, SalesOrderId ASC) AS RowAsc,
        ROW_NUMBER() OVER (
            PARTITION BY CustomerId
            ORDER BY TotalDue DESC, SalesOrderId DESC) AS RowDesc
    FROM Sales.SalesOrderHeader SOH
) x
WHERE
    RowAsc IN (RowDesc, RowDesc - 1, RowDesc + 1)
GROUP BY CustomerId
ORDER BY CustomerId;

SET STATISTICS IO OFF
SET STATISTICS TIME OFF

-- logical reads 686
-- CPU time = 47 ms, elapsed time = 1087 ms. (when DROPCLEANBUFFERS and FREEPROCCACHE are run)
-- CPU time = 62 ms, elapsed time = 235 ms. (when cached)
------------------------------------------------------------------------
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SET STATISTICS IO ON
SET STATISTICS TIME ON

-- Ben-Gan method: integer division picks the middle row (odd counts)
-- or the two middle rows (even counts)
SELECT
    CustomerId,
    AVG(TotalDue)
FROM
(
    SELECT
        CustomerId,
        TotalDue,
        ROW_NUMBER() OVER (
            PARTITION BY CustomerId
            ORDER BY TotalDue) AS RowNum,
        COUNT(*) OVER (
            PARTITION BY CustomerId) AS RowCnt
    FROM Sales.SalesOrderHeader
) x
WHERE
    RowNum IN ((RowCnt + 1) / 2, (RowCnt + 2) / 2)
GROUP BY CustomerId
ORDER BY CustomerId;
SET STATISTICS IO OFF
SET STATISTICS TIME OFF

-- logical reads 686 + 139407 for Worktable = 140093 total
-- CPU time = 344 ms, elapsed time = 1085 ms. (when DROPCLEANBUFFERS and FREEPROCCACHE are run)
-- CPU time = 328 ms, elapsed time = 374 ms. (when cached)
------------------------------------------------------------------------
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SET STATISTICS IO ON
SET STATISTICS TIME ON

-- CLR UDA method: the SQL# Agg_Median aggregate
SELECT
    CustomerId,
    SQL#.SQL#.Agg_Median(TotalDue)
FROM Sales.SalesOrderHeader SOH
GROUP BY CustomerId
ORDER BY CustomerId;

SET STATISTICS IO OFF
SET STATISTICS TIME OFF

-- logical reads 686
-- CPU time = 1812 ms, elapsed time = 2707 ms. (when DROPCLEANBUFFERS and FREEPROCCACHE are run)
-- CPU time = 1906 ms, elapsed time = 1948 ms. (when cached)
------------------------------------------------------------------------




I don't think the Celko method works correctly in all circumstances; the problem occurs when there are duplicates and the ASC and DESC ROW_NUMBERs don't match up. The data below shows the problem: the result should be 15 (the median of 10, 10, 20, 30 is the average of the two middle values, (10 + 20) / 2), and the Ben-Gan method is correct here.

INSERT INTO SalesOrderHeader(CustomerId, TotalDue, SalesOrderId)
SELECT 100,10,10 UNION ALL
SELECT 100,10,10 UNION ALL
SELECT 100,20,10 UNION ALL
SELECT 100,30,10


____________________________________________________

How to get the best help on a forum

http://www.sqlservercentral.com/articles/Best+Practices/61537

Never approach a goat from the front, a horse from the rear, or a fool from any direction.
Post #1042303
Posted Tuesday, January 4, 2011 9:55 AM


Old Hand


Group: General Forum Members
Last Login: Today @ 1:42 PM
Points: 368, Visits: 1,939
Mark-101232 (1/4/2011)
I don't think the Celko method works correctly in all circumstances; the problem occurs when there are duplicates and the ASC and DESC ROW_NUMBERs don't match up. The data below shows the problem: the result should be 15, and the Ben-Gan method is correct here.

INSERT INTO SalesOrderHeader(CustomerId, TotalDue, SalesOrderId)
SELECT 100,10,10 UNION ALL
SELECT 100,10,10 UNION ALL
SELECT 100,20,10 UNION ALL
SELECT 100,30,10


Great catch, Mark!! I tested again with your data and my code (posted above), this time loading just this data into a new table in a new DB:


USE [Test]
GO
CREATE TABLE dbo.SalesOrderHeader (CustomerId INT, TotalDue INT, SalesOrderId INT)

INSERT INTO dbo.SalesOrderHeader(CustomerId, TotalDue, SalesOrderId)
SELECT 100,10,10 UNION ALL
SELECT 100,10,10 UNION ALL
SELECT 100,20,10 UNION ALL
SELECT 100,30,10

And then, of course, change the "Sales" schema references to "dbo" in the previously noted sample code.

It should be noted that the CLR (via SQL#) Median Aggregate (UDA) also worked correctly. And this is one reason why I definitely like CLR for things like this: it encapsulates complex logic. The harder code is to read, the more likely it is to get messed up by someone, either in the beginning or when making changes. The CLR Aggregate gives a very simple and accurate way of doing an otherwise complex task, which leaves the SQL developer free to worry about the rest of the query without the complicating factor of whatever was needed to get the Median working. So while it is slightly less efficient, I do see definite benefits to it.
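
For anyone who has not set up a CLR aggregate before, the T-SQL side is only two DDL statements. This is a minimal sketch; the assembly path, assembly name, and class name below are hypothetical placeholders, not the actual SQL# installation script:

-- Register the compiled .NET assembly (path and names are hypothetical)
CREATE ASSEMBLY MedianAggregates
FROM 'C:\Assemblies\MedianAggregates.dll'
WITH PERMISSION_SET = SAFE;
GO

-- Expose the aggregate class to T-SQL
CREATE AGGREGATE dbo.Agg_Median (@value FLOAT)
RETURNS FLOAT
EXTERNAL NAME MedianAggregates.[MedianAggregate];
GO

-- It is then called just like a built-in aggregate such as AVG()
SELECT CustomerId, dbo.Agg_Median(TotalDue)
FROM dbo.SalesOrderHeader
GROUP BY CustomerId;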





SQL# - http://www.SQLsharp.com/
Post #1042548
Posted Tuesday, January 4, 2011 10:05 AM


Ten Centuries


Group: General Forum Members
Last Login: 2 days ago @ 8:37 AM
Points: 1,140, Visits: 703
Mark-101232 (1/4/2011)
I don't think the Celko method works correctly in all circumstances; the problem occurs when there are duplicates and the ASC and DESC ROW_NUMBERs don't match up.


Garbage in, garbage out. If you pass in nonsensical data that lacks a key, you'll get back a nonsense answer. Break the ties and everything works fine.



--
Adam Machanic
SQL Server MVP
SQLblog.com: THE SQL Server Blog Spot on the Web
Post #1042560
Posted Tuesday, January 4, 2011 10:26 AM


Old Hand


Group: General Forum Members
Last Login: Today @ 1:42 PM
Points: 368, Visits: 1,939
Adam Machanic (1/4/2011)
Garbage in, garbage out. If you pass in nonsensical data that lacks a key, you'll get back a nonsense answer. Break the ties and everything works fine.


D'oh! I just retested with unique SalesOrderId values and the Celko method did work correctly.
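
For reference, the retest just breaks the ties with unique SalesOrderId values (the specific values here are illustrative):

TRUNCATE TABLE dbo.SalesOrderHeader;

INSERT INTO dbo.SalesOrderHeader(CustomerId, TotalDue, SalesOrderId)
SELECT 100,10,10 UNION ALL
SELECT 100,10,11 UNION ALL
SELECT 100,20,12 UNION ALL
SELECT 100,30,13
-- With the key unique, RowAsc and RowDesc pair up correctly and the
-- Celko method returns the expected median of 15.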

I still do like the simplicity of the UDA, though.





SQL# - http://www.SQLsharp.com/
Post #1042577
Posted Friday, June 28, 2013 12:16 PM
Grasshopper


Group: General Forum Members
Last Login: Monday, September 8, 2014 5:29 PM
Points: 13, Visits: 196
I found your code useful.

http://databasebuddha.blogspot.com/2013/06/better-linear-regression-forecasting.html
Post #1468669
Posted Wednesday, February 5, 2014 7:07 AM
Forum Newbie


Group: General Forum Members
Last Login: Thursday, February 6, 2014 6:07 AM
Points: 2, Visits: 14
The code examples are not formatted correctly in some cases, at least on Chrome. Those need tweaking in the post.

I would gently and humbly offer that the CLR provides somewhat better testability potential: SQL is notoriously hard to write unit tests for. However, I may be completely off base.
Post #1538175
Posted Wednesday, February 5, 2014 7:53 AM
Ten Centuries


Group: General Forum Members
Last Login: Thursday, June 12, 2014 2:56 PM
Points: 1,386, Visits: 256
From my point of view, the more SQL aggregates, the better. Most of the SQL computations I do involve things like rolling averages, rolling standard deviations, rolling minimums, and rolling maximums.

Part of the reason we are going from SQL 2005 to SQL 2012 is the ability to compute rolling percentiles.
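
As a sketch of what those computations look like with the SQL 2012 windowing support (the table and column names here are hypothetical):

SELECT
    TagId,
    ReadingTime,
    Value,
    -- SQL 2012 window frames make rolling aggregates declarative (30-row window)
    AVG(Value) OVER (PARTITION BY TagId ORDER BY ReadingTime
                     ROWS BETWEEN 29 PRECEDING AND CURRENT ROW) AS RollingAvg30,
    -- PERCENTILE_CONT in 2012 supports OVER (PARTITION BY ...) but not a ROWS
    -- frame, so this is a per-tag median rather than a true rolling percentile
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY Value)
        OVER (PARTITION BY TagId) AS MedianPerTag
FROM dbo.TagReadings;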
Post #1538190
Posted Wednesday, February 5, 2014 7:57 AM


Ten Centuries


Group: General Forum Members
Last Login: 2 days ago @ 8:37 AM
Points: 1,140, Visits: 703
For those of you who are interested in this topic, here's a link to a presentation I did at last year's TechEd show:

http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/DBI-B404

This presentation discusses a specific CLR aggregate-based technique that -- at least in my humble opinion -- completely opens the door with regard to functionality.

Enjoy


--
Adam Machanic
SQL Server MVP
SQLblog.com: THE SQL Server Blog Spot on the Web
Post #1538193
Posted Wednesday, February 5, 2014 8:24 AM
Ten Centuries


Group: General Forum Members
Last Login: Thursday, June 12, 2014 2:56 PM
Points: 1,386, Visits: 256
Here is a problem that SQL Aggregates are good at solving:

I work at a wastewater treatment plant and have 30,000 tags/data points that I am monitoring. These tags tell me things like how fast the sludge is flowing, how fast the pumps are moving, tank levels, etc.

The problem I have is that when a tag gets dirty, it can send me the same value over and over again.

The solution is to get an alert when the Rolling Standard Deviation approaches zero.
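
In SQL 2012 windowing terms, that check might look something like this; the schema and the threshold are illustrative:

WITH Rolling AS
(
    SELECT
        TagId,
        ReadingTime,
        -- standard deviation over the last 20 readings per tag
        STDEV(Value) OVER (PARTITION BY TagId ORDER BY ReadingTime
                           ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) AS RollingStDev
    FROM dbo.TagReadings
)
SELECT TagId, ReadingTime, RollingStDev
FROM Rolling
WHERE RollingStDev < 0.0001;  -- a dirty tag repeating one value drives this toward zero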
Post #1538214