
Working with Queues in SQL Server
Posted Monday, April 05, 2010 10:32 AM
SSC Rookie

The solution simply illustrates, in general, what you can do to get around the problem. You can certainly add ordering, user audits, and resolution of orphaned items. Then again, if you need to solve these issues in your application, you might want to consider SQL Server Service Broker.
Post #896846
Posted Monday, April 05, 2010 11:48 AM


SSC-Insane

Nice article, and better discussion.



Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server


SQL RNNR

Posting Performance Based Questions - Gail Shaw
Posting Data Etiquette - Jeff Moden
Hidden RBAR - Jeff Moden
VLFs and the Tran Log - Kimberly Tripp
Post #896915
Posted Monday, April 05, 2010 3:38 PM


SSC-Dedicated

We used to do something similar for sending email here on the site. We had multiple machines that sent, and each needed to read a database table to get its list of "things to send".

Our method of handling this was to update the next xx rows with a client machine name. This effectively removed the rows from the queue. Once the update was complete, the client could query for rows that matched its name but had not been processed.

Upon completion of processing, the client updated each row to show it as processed, and then started over with another update of new rows in the queue.
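The claim-by-update pattern described above can be sketched in T-SQL. This is purely illustrative: the table `dbo.EmailQueue` and its columns (`Id`, `Payload`, `ClientName`, `Processed`) are hypothetical stand-ins, not the actual schema used on the site.

```sql
DECLARE @ClientName sysname = HOST_NAME();  -- identify this worker machine

-- Step 1: claim the next batch by stamping this client's name.
-- UPDLOCK + READPAST lets concurrent workers skip each other's rows.
WITH next_batch AS
(
    SELECT TOP (50) ClientName
    FROM   dbo.EmailQueue WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE  ClientName IS NULL
    ORDER  BY Id
)
UPDATE next_batch
SET    ClientName = @ClientName;

-- Step 2: read back the rows this client claimed but has not yet processed
SELECT Id, Payload
FROM   dbo.EmailQueue
WHERE  ClientName = @ClientName
  AND  Processed = 0;

-- Step 3: after sending each item, mark its row as processed
UPDATE dbo.EmailQueue
SET    Processed = 1
WHERE  Id = @Id;   -- @Id of the item just handled
```

The stamp in step 1 is what "removes" the rows from the queue for everyone else; a row claimed by a crashed client would need a separate sweep to reset its `ClientName`, as later posts in this thread discuss.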







Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #897070
Posted Tuesday, April 06, 2010 12:49 AM


SSChampion

Steve Jones - Editor (4/5/2010)
Our method of handling this was to update the next xx rows with a client machine name. This effectively removed the rows from the queue. Once the update was complete, the client could query for rows that matched its name but had not been processed.

This is exactly the pattern used at two places I have worked, and the basis for the code posted earlier in the thread. Of course all that was before the days of Service Broker, but with careful coding and consideration for the concurrency and locking issues involved, it can be made to work very well.




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #897343
Posted Tuesday, April 06, 2010 10:00 AM
SSC Rookie

In my view the code fragment from Ahmad is missing a number of considerations that should be taken into account for a practical SQL table-based queueing system:

* Polling SQL table-based queues, as opposed to Service Broker, scales poorly. If you have multiple queues, or you need to keep queue latency down, then Service Broker (or perhaps MSMQ) is by far the better answer. Assuming 20 queues, and wanting to keep the average enqueue-to-process time below 50 milliseconds, you may be issuing 200 individual SQL requests a second, incurring a substantial constant overhead.

* In a practical environment you should probably only COMMIT the change to the queue entry once you have fully processed the entry. Your SQL 2008 code specifically precludes doing this. If you COMMIT before processing the queue entry, you need some way of tracking orphaned entries, which would include updating the queue entry with process identification, date/time, etc. before COMMIT, plus a separately executing process to detect orphaned entries and requeue them. This also raises the whole question of how long you can afford an entry to remain orphaned before it is detected, any duplicate processing is reversed, and the entry is finally processed.

* Probably the most vexed issue in queue processing is poison messages: messages that, for whatever reason, your code cannot handle. You cannot realistically keep the status of the queue entry in the entry itself, because you may ROLLBACK rather than COMMIT and so process the entry repeatedly. In my view the simplest answer is two connections: one is transacted and handles the queue; the other is untransacted and keeps track of the status of the queue entry (number of retries, etc.). When retrieving a queue entry you should retrieve both the entry and its associated status entry, and only retrieve entries that have not already been retried excessively.

* The READPAST locking hint may skip messages not only if the row is locked (busy being INSERTed) but also if the index is busy. The net effect is that you may retrieve no messages even though there are messages in the queue, or you may process messages out of sequence. An associated issue is that performance may degrade, and your queue become less stable, as the number of handled queue messages increases. A better approach may be to DELETE the queue entry on processing completion rather than mark it as processed, and keep a separate history log.

So, in summary: simple, yes; realistic, not close.
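Several of the points above (commit only after processing so a failure puts the entry back, DELETE rather than mark-as-processed, READPAST to skip rows held by other workers) can be combined in one dequeue pattern. The following is a hedged T-SQL sketch; the table `dbo.WorkQueue` and its columns are hypothetical:

```sql
-- Dequeue under a transaction that commits only after processing succeeds.
BEGIN TRANSACTION;

DECLARE @msg TABLE (Id int, Payload varchar(max));

-- DELETE ... OUTPUT atomically removes and returns one entry;
-- READPAST makes other workers skip this locked row instead of blocking.
DELETE TOP (1)
FROM   dbo.WorkQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.Id, deleted.Payload INTO @msg;

-- ... process the message here ...
-- On failure, ROLLBACK puts the entry back in the queue;
-- on success, COMMIT makes its removal permanent.

COMMIT TRANSACTION;
```

Note this sketch does not by itself handle poison messages: a message that always fails would be rolled back and retried forever. That is where the separate, untransacted status/retry-count record suggested above comes in, consulted before retrieving an entry.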

Post #897674
Posted Thursday, April 08, 2010 3:28 PM


SSCrazy Eights

Ahmad Alkilani (4/5/2010)
The solution simply illustrates in general what you can do to get around the problem. You can most certainly add ordering, user audits and resolutions to orphaned items. Then again, if you have a necessity to solve these issues in your application you might want to consider going with SQL Server Service Broker

And if you were going to consider that, it might be a good idea to read http://rusanu.com/2010/03/09/dealing-with-large-queues/ first, as well as to weigh the need to use a message-passing interface as your interface to your queues, and the need for a few auxiliary tables, since you can't adjust in any way the structure of the tables underlying SQL Service Broker queues.
Also, if you have a high queue insertion rate (as you will, for example, if you are handling the request queue[s] in a discrete event simulator, or in most simulation-based prediction generators), you will need a housekeeping task to do index reorganisation pretty frequently, whether you use Service Broker queues or roll your own, and that is much easier to do if you roll your own.
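A roll-your-own housekeeping task of the kind described might look like the sketch below. The table `dbo.WorkQueue` is hypothetical, and the 30% fragmentation threshold is an arbitrary illustrative choice, not a recommendation:

```sql
-- Hypothetical scheduled housekeeping job for a hand-rolled queue table.
-- High insert/delete churn fragments the queue's indexes quickly.
IF EXISTS
(
    SELECT 1
    FROM   sys.dm_db_index_physical_stats
           (DB_ID(), OBJECT_ID('dbo.WorkQueue'), NULL, NULL, 'LIMITED')
    WHERE  avg_fragmentation_in_percent > 30   -- arbitrary cut-off
)
    ALTER INDEX ALL ON dbo.WorkQueue REBUILD;     -- heavy fragmentation
ELSE
    ALTER INDEX ALL ON dbo.WorkQueue REORGANIZE;  -- light, online tidy-up
```

With your own table you can schedule this freely; the tables underlying Service Broker queues cannot be maintained as directly, which is the point being made above.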



Tom
Post #900035
Posted Thursday, April 08, 2010 3:40 PM


SSCrazy Eights

tony.turner (4/6/2010)
In my view the code fragment from Ahmad is missing a number of considerations that should be taken into account for a practical SQL table-based queueing systems:
I agree that much is missing, but not with the idea that Service Broker is a good approach for high queue throughput (see previous comment), or with the idea that nontransactional access to status (it isn't clear whether you actually mean separate transactions in a parallel session, or something else) is a good approach to defensive programming here.
So, in summary: simple, yes; realistic, not close.


Yes, that was my feeling too.


Tom
Post #900044