March 31, 2017 at 3:39 pm
I am working on a FOR INSERT trigger that supplies values to a stored procedure. There are several clocks in the company that users punch in/out on; the punches are queued by RabbitMQ and sent into the database by a real-time service. The table in the time database contains the emp_num and the log in/out time. I wrote a trigger on this table to send the records into a different system. It picks the values from the inserted table, stores them in variables, and passes them to stored procedures.
My first error was with TRANSACTION ISOLATION LEVEL SNAPSHOT. Within the procedure the isolation level is set to SNAPSHOT, and it threw the error that the transaction isolation level cannot be changed once the transaction has begun. I then tried to commit the transaction within the trigger and then call the procedure. The records were processed by the procedure, but I got the error that the transaction ended in the trigger and the batch was aborted. Also, the external service that pushes the records into the database thinks the transaction failed and keeps sending the same record again and again, which inserted duplicate records. To avoid this problem I check whether the record was already inserted into the base table and return from the trigger.
It worked fine in my testing with single punch in/outs, but when we tested in real time it failed. The logs show that deadlocks occurred several times, and it took several hours for a record to travel from the time clock to the database. A single punch in/out also created multiple records in the time database table with different timestamps. My major issues are the delay and the duplicate records for a single punch in/out. Below is the trigger code. Is there any other way to bypass the isolation level error, or the exception from committing the transaction midway through the trigger?
ALTER TRIGGER [dbo].[TimeLogInsert_AR1]
ON [dbo].[USER_TIME_LOG]
FOR INSERT
AS
BEGIN
    DECLARE @TRIGGER_ROW_COUNT INT
    SET @TRIGGER_ROW_COUNT = @@ROWCOUNT
    IF @TRIGGER_ROW_COUNT = 0 RETURN
    -- bail out if this record is already in the base table (i.e. this is a resend).
    IF (SELECT COUNT(*)
        FROM USER_TIME_LOG TL
        JOIN inserted bt ON TL.USER_ID = bt.USER_ID
            AND TL.LOG_TIME = bt.LOG_TIME
            AND TL.LOG_TIME_ORIGINAL = bt.LOG_TIME_ORIGINAL) > 1
        RETURN
    DECLARE @EMP_NUM NVARCHAR(10)
        , @LOG_DATE DATETIME
        , @LOG_TIME NVARCHAR(10)
        , @Device_SR_NO NVARCHAR(40)
        , @TDATE NVARCHAR(10)
        , @TTIME NVARCHAR(10)
        , @RETURN_VALUE INT
        , @INFOBAR NVARCHAR(50)
    IF @TRIGGER_ROW_COUNT > 0
    BEGIN
        SELECT @EMP_NUM = bt.USER_ID
            , @LOG_DATE = CONVERT(DATE, bt.LOG_TIME)
            , @LOG_TIME = REPLACE(CONVERT(VARCHAR(5), bt.LOG_TIME, 108), ':', '')
            , @Device_SR_NO = RIGHT(bt.DEVICE_SERIAL_NO, 3)
            , @TDATE = CONVERT(NVARCHAR(10), bt.LOG_TIME, 112)
            , @TTIME = REPLACE(CONVERT(VARCHAR(8), bt.LOG_TIME, 108), ':', '')
        FROM inserted bt
    END
    IF @@TRANCOUNT > 0
        COMMIT TRAN -- committed here to overcome the isolation level problem
    EXEC @RETURN_VALUE = Proc1
        @ClockInDate = @LOG_DATE
        , @ClockInTime = @LOG_TIME
        , @EmployeeNumber = @EMP_NUM
        , @shift = NULL
        , @ReductionCategory = 1
        , @InfoBar = @INFOBAR OUTPUT
    IF @RETURN_VALUE > 0 AND @INFOBAR <> 'ClockIn Successful '
        EXEC @RETURN_VALUE = Proc2
            @ClockOutDate = @LOG_DATE
            , @ClockOutTime = @LOG_TIME
            , @EmployeeNumber = @EMP_NUM
            , @ReductionCategory = 1
            , @InfoBar = @InfoBar OUTPUT
END
March 31, 2017 at 7:54 pm
Your existence check using SELECT COUNT(*) is horribly expensive and is the likely cause of many of your problems. Change it to the appropriate IF EXISTS() as a starting point.
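For illustration only, using the column names from the posted trigger, an EXISTS version might look like the sketch below. Note that in an AFTER trigger the new row is already present in the base table, so a plain EXISTS join would always be true; the "more than one copy" test has to survive, e.g. in a HAVING clause, which still lets the engine stop early instead of counting every match:

IF EXISTS (SELECT 1
           FROM USER_TIME_LOG TL
           JOIN inserted bt
             ON TL.USER_ID = bt.USER_ID
            AND TL.LOG_TIME = bt.LOG_TIME
            AND TL.LOG_TIME_ORIGINAL = bt.LOG_TIME_ORIGINAL
           GROUP BY TL.USER_ID, TL.LOG_TIME, TL.LOG_TIME_ORIGINAL
           HAVING COUNT(*) > 1)  -- more than just the freshly inserted row
    RETURN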
I also don't know what Proc1 and Proc2 do but if they have similar performance and transaction problems as the trigger does, that would also be a source of deadlocks for you.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 1, 2017 at 8:31 am
Jeff Moden - Friday, March 31, 2017 7:54 PM [quoted above]
Thanks Jeff.... is there a way I can bypass the transaction error? I am not allowed to make any changes to the procs as they are from an ERP system bought by our company.
April 1, 2017 at 8:12 pm
arun@D - Saturday, April 1, 2017 8:31 AM [quoted above]
Absolutely. Don't do everything in the trigger. If you're relegated to using their procs, use the trigger to simply populate a permanent staging table. That will make it so that people can do their inserts a whole lot faster. Then, have a "sweeper" proc that runs on a regular basis (or continuously, if you need it to) that reads from the "beginning" of the staging table to do what you need to do while the "end" of the staging table is being inserted to by the trigger. Once you've processed the row or rows you need to, delete them from the staging table or mark them as complete.
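A minimal sketch of that pattern (all object names here are made up; adjust to the real schema, and the ERP proc calls go where the comment is):

-- staging table the trigger inserts into
CREATE TABLE dbo.USER_TIME_LOG_STAGE (
    TIME_LOG_ID INT IDENTITY(1,1) PRIMARY KEY,
    USER_ID     NVARCHAR(10) NOT NULL,
    LOG_TIME    DATETIME     NOT NULL,
    STATUS      VARCHAR(10)  NULL   -- NULL = not yet processed
);
GO
-- "sweeper" proc, scheduled via an Agent job
CREATE PROCEDURE dbo.Sweep_USER_TIME_LOG_STAGE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @TIME_LOG_ID INT, @USER_ID NVARCHAR(10), @LOG_TIME DATETIME;
    WHILE 1 = 1
    BEGIN
        -- read from the "beginning" of the staging table
        SELECT TOP (1) @TIME_LOG_ID = TIME_LOG_ID, @USER_ID = USER_ID, @LOG_TIME = LOG_TIME
        FROM dbo.USER_TIME_LOG_STAGE
        WHERE STATUS IS NULL
        ORDER BY TIME_LOG_ID;
        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to process
        -- EXEC the ERP clock-in/clock-out procs here using @USER_ID / @LOG_TIME ...
        -- then delete the row or mark it as complete
        UPDATE dbo.USER_TIME_LOG_STAGE
        SET STATUS = 'Done'
        WHERE TIME_LOG_ID = @TIME_LOG_ID;
    END
END
GO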
Of course, I'd still dig into their procs and find out why they think they need a RBAR solution for this. I'd also check out any other performance/deadlock issues and make them fix the problems.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 2, 2017 at 3:10 am
I'd follow Jeff's solution overall, but here are a few things I've done when working with a table as a queue. Note, these may or may not apply to your situation; it's hard to tell from the description.
Add a column to the table for the SPID. Populate it with the process SPID when you start working, then work with those rows. This allows scale-out, and if you happen to get two processes (unlikely) that try to read the same rows, one will know that xx rows are already being processed. If the process dies, you can reset the SPID column to NULL for those rows and the process keeps picking them up.
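As a sketch (assuming the staging table from Jeff's suggestion gained a nullable SPID column; names are illustrative):

-- claim any unowned rows for this worker
UPDATE dbo.USER_TIME_LOG_STAGE
SET SPID = @@SPID
WHERE SPID IS NULL;

-- then work only the rows this process claimed
SELECT TIME_LOG_ID, USER_ID, LOG_TIME
FROM dbo.USER_TIME_LOG_STAGE
WHERE SPID = @@SPID
  AND STATUS IS NULL;

-- if a worker dies, free its rows so the next run picks them up
-- UPDATE dbo.USER_TIME_LOG_STAGE SET SPID = NULL WHERE SPID = @deadSpid AND STATUS IS NULL;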
Use date start/end times to track what's happening. You might want to mark things as processed and then roll them off after some xx time. I have found it's nice to be able to look back a few periods (days, hours, etc.) and see what has been happening.
As Jeff mentioned, the trigger logic should be really, really small. You can have the trigger just insert, and a proc that runs every minute to process the rows. I'd put most of the logic in a proc or two, as this makes things more testable.
Speaking of tests: I'd really use a testing framework to help here. MSTest, tSQLt, tSQLUnit, something. Set up a series of cases that show what you expect, then run your procs as part of the test to see if things work. More importantly, as you tweak, you have a set of tests designed to let you know if you've forgotten various cases of data/dates/times/transactions/etc.
If you want these fairly real time, and don't want an Agent (or other scheduler) job, then consider re-using Rabbit or setting up Service Broker. Let the trigger drop a message in there, and use a proc that executes when a new message is received. If you do this, make sure your admins do DR testing with the queue in the db. There are a few things to learn here about restoring the dbs.
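A bare-bones Service Broker sketch of that idea (object names are invented; using the [DEFAULT] contract and message type keeps the setup minimal, and dbo.ProcessTimeLogMessage is a hypothetical processing proc):

-- queue with internal activation: the proc fires when a message arrives
CREATE QUEUE dbo.TimeLogQueue
    WITH STATUS = ON,
    ACTIVATION (
        PROCEDURE_NAME = dbo.ProcessTimeLogMessage,  -- your processing proc
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER );
CREATE SERVICE TimeLogService
    ON QUEUE dbo.TimeLogQueue ([DEFAULT]);
GO
-- inside the trigger: drop a message on the queue and return immediately
DECLARE @h UNIQUEIDENTIFIER, @msg XML;
SELECT @msg = (SELECT USER_ID, LOG_TIME FROM inserted FOR XML PATH('row'));
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE TimeLogService
    TO SERVICE 'TimeLogService'
    ON CONTRACT [DEFAULT]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h (@msg);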
April 4, 2017 at 3:49 pm
Agreed, something like below. Keep the trigger code as efficient as possible. (Btw, cluster the new table on LOG_TIME first, to make processing and cleanup of that table easier.)
Also, never attempt to commit a transaction inside a trigger; that just doesn't work in SQL Server.
ALTER TRIGGER [dbo].[TimeLogInsert_AR1]
ON [dbo].[USER_TIME_LOG]
AFTER INSERT
AS
SET NOCOUNT ON;
INSERT INTO dbo.USER_TIME_LOG_NEW_INSERTS (
    USER_ID,
    LOG_TIME,
    DEVICE_SERIAL_NO,
    has_been_processed
    )
SELECT
    bt.USER_ID,
    bt.LOG_TIME,
    bt.DEVICE_SERIAL_NO,
    0 AS has_been_processed
FROM inserted bt
GO
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
April 4, 2017 at 3:55 pm
arun@D - Friday, March 31, 2017 3:39 PM [original post and trigger code quoted above]
This is not gonna work within a trigger:
IF @@TRANCOUNT > 0
COMMIT TRAN -- committed here to overcome the isolation level problem
Try adding PRINT @@TRANCOUNT before and after the statement and see what effect it has.
If you want to commit the transaction here, you need to end the trigger.
Follow Jeff's advice: instead of assigning the values to variables, insert them into a staging table.
Then have a job which takes records from that table one by one and executes the procedures.
So, your trigger must look like this:
ALTER TRIGGER [dbo].[TimeLogInsert_AR1]
ON [dbo].[USER_TIME_LOG]
FOR INSERT
AS
INSERT INTO USER_TIME_LOG_NewRecords
(EMP_NUM ,LOG_TIME, Device_SR_NO)
SELECT bt.USER_ID, bt.LOG_TIME, bt.DEVICE_SERIAL_NO
FROM inserted bt
WHERE NOT EXISTS (
select * from USER_TIME_LOG TL
where TL.USER_ID = bt.USER_ID
AND TL.LOG_TIME = bt.LOG_TIME
AND TL.LOG_TIME_ORIGINAL = bt.LOG_TIME_ORIGINAL
)
AND NOT EXISTS (
select * from USER_TIME_LOG_NewRecords NR
where NR.USER_ID = bt.USER_ID
AND NR.LOG_TIME = bt.LOG_TIME
)
GO
That's it.
Faulty procedures must be kept out of the INSERT transaction.
_____________
Code for TallyGenerator
April 5, 2017 at 8:04 am
Thank you all for the help. I tried the staging table method and ran into a different error. I am confused about how to catch the errors and handle them. I schedule the job every 2 minutes and it should be near real time. Below is the procedure code. It works fine if there are no errors in the TRY block, but when there is an error and execution goes into the CATCH block, the scheduled job never ends and loops forever. The same record is processed several times. Should I use BEGIN and COMMIT TRANSACTION? I don't understand why it loops infinitely when I am updating the STATUS to 'Failed'.
ALTER PROCEDURE [dbo].[Load_TIMELogInfo_FT]
AS
BEGIN
    DECLARE @EMP_NUM NVARCHAR(10)
        , @DEVICE_SR_NO NVARCHAR(40)
        , @LOG_TIME_ORIGINAL DATETIME
        , @LOG_DATE DATETIME
        , @LOG_TIME NVARCHAR(10)
        , @TDATE NVARCHAR(10)
        , @TTIME NVARCHAR(10)
        , @RETURN_VALUE INT
        , @INFOBAR NVARCHAR(50)
        , @SPID INT
        , @TIME_LOG_ID INT
    --SET TRANSACTION ISOLATION LEVEL SNAPSHOT -- READ UNCOMMITTED
    SELECT @SPID = @@SPID
    UPDATE USER_TIME_LOG_STAGE
    SET SPID = @SPID
    WHERE SPID IS NULL
    DECLARE TimeLogBatch_Cur CURSOR FORWARD_ONLY READ_ONLY FOR
        SELECT USER_ID
            , LOG_TIME
            , TIME_LOG_ID
            , DEVICE_SERIAL_NO
        FROM USER_TIME_LOG_STAGE
        WHERE SPID = @SPID
            AND STATUS IS NULL
    OPEN TimeLogBatch_Cur
    FETCH NEXT FROM TimeLogBatch_Cur INTO @EMP_NUM, @LOG_TIME_ORIGINAL, @TIME_LOG_ID, @DEVICE_SR_NO
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            SELECT @LOG_DATE = CONVERT(DATE, @LOG_TIME_ORIGINAL)
                , @LOG_TIME = REPLACE(CONVERT(VARCHAR(5), @LOG_TIME_ORIGINAL, 108), ':', '')
                , @TDATE = CONVERT(NVARCHAR(10), @LOG_TIME_ORIGINAL, 112)
                , @TTIME = REPLACE(CONVERT(VARCHAR(8), @LOG_TIME_ORIGINAL, 108), ':', '')
            EXEC @RETURN_VALUE = [FT_Pilot5_App].[dbo].[TAClockInSp]
                @ClockInDate = @LOG_DATE
                , @ClockInTime = @LOG_TIME
                , @EmployeeNumber = @EMP_NUM
                , @shift = NULL -- Employee Shift
                , @ReductionCategory = 1 -- reduction category: 1-Normal Clock IN, 2-Lunch IN, 3-Break IN
                , @InfoBar = @INFOBAR OUTPUT
            IF @RETURN_VALUE > 0 AND @INFOBAR <> 'ClockIn Successful '
                EXEC @RETURN_VALUE = [FT_Pilot5_App].[dbo].[TAClockOutSp]
                    @ClockOutDate = @LOG_DATE
                    , @ClockOutTime = @LOG_TIME
                    , @EmployeeNumber = @EMP_NUM
                    , @ReductionCategory = 1 -- reduction category: 1-Normal Clock OUT, 2-Lunch OUT, 3-Break OUT
                    , @InfoBar = @INFOBAR OUTPUT
            INSERT INTO rhsco.s10a5123.rhsmisc.idcpf
                ( CLOCK, SEQ, CONT, DST, TDATE, TTIME, STATUS, DEV1, DATA1 )
            VALUES ( RIGHT(@Device_SR_NO, 3), 11, 0, 0, @TDATE, @TTIME, ' ', 0, @EMP_NUM )
            UPDATE USER_TIME_LOG_STAGE
            SET STATUS = 'Processed'
                , INFOBAR = @INFOBAR
            WHERE TIME_LOG_ID = @TIME_LOG_ID
                AND USER_ID = @EMP_NUM
            FETCH NEXT FROM TimeLogBatch_Cur INTO @EMP_NUM, @LOG_TIME_ORIGINAL, @TIME_LOG_ID, @DEVICE_SR_NO
        END TRY
        BEGIN CATCH
            UPDATE USER_TIME_LOG_STAGE
            SET STATUS = 'Failed'
                , INFOBAR = @INFOBAR
            WHERE TIME_LOG_ID = @TIME_LOG_ID
                AND USER_ID = @EMP_NUM
        END CATCH
    END
    CLOSE TimeLogBatch_Cur
    DEALLOCATE TimeLogBatch_Cur
END
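(For reference, the infinite loop is almost certainly because the FETCH NEXT sits inside the TRY block: when an error is thrown before it runs, control jumps to CATCH without fetching, @@FETCH_STATUS is still 0, and the same row is retried forever. A minimal restructure, moving the fetch outside the TRY/CATCH and capturing ERROR_MESSAGE() so the failure reason is recorded, might look like this sketch; it has not been tested against the real procs:)

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- ... EXEC the clock-in/clock-out procs, then:
        UPDATE USER_TIME_LOG_STAGE
        SET STATUS = 'Processed', INFOBAR = @INFOBAR
        WHERE TIME_LOG_ID = @TIME_LOG_ID AND USER_ID = @EMP_NUM
    END TRY
    BEGIN CATCH
        UPDATE USER_TIME_LOG_STAGE
        SET STATUS = 'Failed', INFOBAR = ERROR_MESSAGE()  -- record why it failed
        WHERE TIME_LOG_ID = @TIME_LOG_ID AND USER_ID = @EMP_NUM
    END CATCH
    -- fetch OUTSIDE the TRY/CATCH so the loop always advances, success or failure
    FETCH NEXT FROM TimeLogBatch_Cur INTO @EMP_NUM, @LOG_TIME_ORIGINAL, @TIME_LOG_ID, @DEVICE_SR_NO
END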