deadlock issue

  • Hi Experts,

We are seeing repeated deadlocks on one of our prod servers. Can anyone please tell me why the deadlocks are occurring and what can be done to fix, or at least minimize, them? We are using SQL Server 2016 Enterprise Edition.

    Attaching the deadlock graph. I pulled the deadlock information from the system_health session.

    Thanks,

    Sam

  • So, the deadlock graph didn't upload. You'll need to do a little more formatting to get it here and/or post it to a file share somewhere and post the link.

     

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • <deadlock>
    <victim-list>
    <victimProcess id="process237485b868" />
    </victim-list>
    <process-list>
    <process id="process237485b868" taskpriority="0" logused="0" waitresource="KEY: 34:72058363954855936 (e795f34fdfa7)" waittime="503" ownerId="9642762791" transactionname="implicit_transaction" lasttranstarted="2019-08-22T22:36:51.997" XDES="0x5c93fb6a8" lockMode="U" schedulerid="1" kpid="2388" status="suspended" spid="446" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2019-08-22T22:36:51.997" lastbatchcompleted="2019-08-22T22:36:51.870" lastattention="1900-01-01T00:00:00.870" clientapp="Microsoft JDBC Driver for SQL Server" hostname="sjc4mdmappp01.corp.service-now.com" hostpid="0" loginname="cmx_ors" isolationlevel="read committed (2)" xactid="9642762791" currentdb="34" lockTimeout="4294967295" clientoption1="671219744" clientoption2="128058">
    <executionStack>
    <frame procname="adhoc" line="1" stmtstart="310" sqlhandle="0x0200000004296002253d1e77f9e6736ee848e3316b90c0000000000000000000000000000000000000000000">
    UPDATE C_REPOS_APPLIED_LOCK SET LOCK_QUERY_SQL=@P0, LOCK_EXCLUSIVE_IND=@P1, JOB_TYPE_STR=@P2, MODULE_NAME=@P3, INTERACTION_ID=@P4, LAST_UPDATE_DATE=@P5, UPDATED_BY=@P6 WHERE ROWID_TABLE = @P7 AND LOCK_GROUP_STR=@P8 </frame>
    <frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
    unknown </frame>
    </executionStack>
    <inputbuf>
    (@P0 nvarchar(4000),@P1 bit,@P2 nvarchar(4000),@P3 nvarchar(4000),@P4 varchar(8000),@P5 datetime2,@P6 nvarchar(4000),@P7 nvarchar(4000),@P8 nvarchar(4000))UPDATE C_REPOS_APPLIED_LOCK SET LOCK_QUERY_SQL=@P0, LOCK_EXCLUSIVE_IND=@P1, JOB_TYPE_STR=@P2, MODULE_NAME=@P3, INTERACTION_ID=@P4, LAST_UPDATE_DATE=@P5, UPDATED_BY=@P6 WHERE ROWID_TABLE = @P7 AND LOCK_GROUP_STR=@P8 </inputbuf>
    </process>
    <process id="process237f7e6928" taskpriority="0" logused="480" waitresource="KEY: 34:72058363954659328 (838cbf52070e)" waittime="508" ownerId="9642762806" transactionname="implicit_transaction" lasttranstarted="2019-08-22T22:36:52.003" XDES="0x456415d6a8" lockMode="X" schedulerid="3" kpid="25708" status="suspended" spid="438" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2019-08-22T22:36:52.003" lastbatchcompleted="2019-08-22T22:36:51.907" lastattention="1900-01-01T00:00:00.907" clientapp="Microsoft JDBC Driver for SQL Server" hostname="sjc4mdmappp01.corp.service-now.com" hostpid="0" loginname="cmx_ors" isolationlevel="read committed (2)" xactid="9642762806" currentdb="34" lockTimeout="4294967295" clientoption1="671219744" clientoption2="128058">
    <executionStack>
    <frame procname="adhoc" line="1" stmtstart="40" sqlhandle="0x020000002fb7d6119a0168bacfcfccca5c361149704f8e940000000000000000000000000000000000000000">
    DELETE FROM C_REPOS_APPLIED_LOCK WHERE LOCK_GROUP_STR=@P0 </frame>
    <frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
    unknown </frame>
    </executionStack>
    <inputbuf>
    (@P0 nvarchar(4000))DELETE FROM C_REPOS_APPLIED_LOCK WHERE LOCK_GROUP_STR=@P0 </inputbuf>
    </process>
    </process-list>
    <resource-list>
    <keylock hobtid="72058363954855936" dbid="34" objectname="cmx_ors.dbo.C_REPOS_APPLIED_LOCK" indexname="PK_APPLIED_LOCK" id="lock10c7de7600" mode="X" associatedObjectId="72058363954855936">
    <owner-list>
    <owner id="process237f7e6928" mode="X" />
    </owner-list>
    <waiter-list>
    <waiter id="process237485b868" mode="U" requestType="wait" />
    </waiter-list>
    </keylock>
    <keylock hobtid="72058363954659328" dbid="34" objectname="cmx_ors.dbo.C_REPOS_APPLIED_LOCK" indexname="C_REPOS_ROWID_JOBTYPE_IND" id="lock4ee7479400" mode="U" associatedObjectId="72058363954659328">
    <owner-list>
    <owner id="process237485b868" mode="U" />
    </owner-list>
    <waiter-list>
    <waiter id="process237f7e6928" mode="X" requestType="wait" />
    </waiter-list>
    </keylock>
    </resource-list>
    </deadlock>
• Looks like a pretty straightforward issue. You've got two different delete statements running from one process; it looks like both reference the same LOCK_GROUP_STR, and then the update is running against, I'll bet, either the same value or one nearby.

    Deadlocks are basically about performance. If all transactions clear quickly, no one ever deadlocks. First, make sure these tables are properly indexed. Tune things if needed (although these seem like very straightforward queries, so it's either missing/bad indexes, or the problem is elsewhere).

    Second, deadlocks usually occur because of other statements in the procedures that are not immediately apparent in the deadlock graph. For example, is there another statement before the statements involved that could take a lock out, including a select statement? This is especially true when multiple tables are involved and the order of processing is different between tables. You may be seeing this too. So, look outside of just the immediate queries involved to see what else is happening.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thanks Grant. Will check with the dev team. Thanks for the help.

  • Hi Grant,

    Need one more piece of help. Suppose we had to collect all the surrounding info; how should my Extended Events trace be configured? I collected the above information from the system_health session. However, when I tried to query the system_health session some time later, I could not see the same deadlock info, and instead I am seeing new, different deadlocks. Not sure why. Does the system_health session get overwritten? That is one question. Second: I don't know why the ring_buffer query to fetch deadlocks doesn't return any results.

    Currently, I am using the below query to get the XML deadlock graphs.

    select serverproperty('errorlogfilename')

    --S:\data\MSSQL11.MSSQLSERVER\MSSQL\Log\\ERRORLOG

     

    SELECT CONVERT(xml, event_data).query('/event/data/value/child::*') AS deadlockgraph
    FROM sys.fn_xe_file_target_read_file('S:\data\MSSQL11.MSSQLSERVER\MSSQL\Log\system_health*.xel', NULL, NULL, NULL)
    WHERE object_name LIKE 'xml_deadlock_report'

    3rd question: I want to manually configure an Extended Events session to capture deadlocks so that I can keep a history of deadlocks for the instance. Could you please suggest all the necessary actions and events to configure to capture the deadlocks?

    This is what I am thinking, but please point out if I am missing any important ones.

    CREATE EVENT SESSION [xe_deadlocks] ON SERVER
    ADD EVENT sqlserver.xml_deadlock_report
    ADD TARGET package0.event_file (SET filename = N'X:\SQLDUMP\xe_deadlocks.xel')
    WITH (
        MAX_MEMORY = 4096 KB,
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
        MAX_DISPATCH_LATENCY = 30 SECONDS,
        MAX_EVENT_SIZE = 0 KB,
        MEMORY_PARTITION_MODE = NONE,
        TRACK_CAUSALITY = OFF,
        STARTUP_STATE = OFF)
    GO

    And lastly, what kind of questions should be asked of the dev team to get the whole picture of what is causing the deadlock? I don't know whether they can reproduce the issue or not.

    Thanks,

    Sam

• So, if you're getting lots of deadlocks (a massive performance hit, by the way; it's not just about the deadlock itself, because every time one happens you're rolling back transactions and causing additional waits and resource use, so deadlocks are bad things), chances are good that you're just rolling over the system_health files. You can read about it here. By default there are four files and they're only 5MB in size, so it's likely that you're filling them and they are rolling over.

    Ring buffer is very temporary memory space. It's only going to show you what's happening right now. You won't see any historical information there at all.
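    For what it's worth, a query along these lines reads the ring_buffer target of the system_health session directly (a sketch; note that the DMV output for the ring buffer is capped at roughly 4MB of formatted XML, which is one common reason deadlock events seem to be "missing" from it):

    -- Read xml_deadlock_report events from the system_health ring_buffer target.
    SELECT x.xed.value('@timestamp', 'datetime2') AS event_time,
           x.xed.query('.') AS deadlock_graph
    FROM (
        SELECT CAST(t.target_data AS xml) AS target_data
        FROM sys.dm_xe_sessions AS s
        JOIN sys.dm_xe_session_targets AS t
            ON s.address = t.event_session_address
        WHERE s.name = N'system_health'
          AND t.target_name = N'ring_buffer'
    ) AS src
    CROSS APPLY src.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);

    If this returns nothing while the .xel files do contain deadlocks, the events have most likely aged out of (or been truncated from) the ring buffer already.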

    You can set up your own Extended Events session; that's fine. However, I'd do a few other things. First, don't just capture all deadlocks. It sounds like you have a lot going on, so I'd at least filter it down to capturing one database at a time, and I would consider other filter criteria you could use to further reduce what you're collecting. Second, don't just capture deadlocks; I'd also toss in sql_batch_starting, rpc_starting, sql_batch_completed, and rpc_completed. Third, enable causality tracking so that you can easily group all these events together to spot which ones are taking part in the deadlock. All of this together with the deadlock event in a single Extended Events session is how I'd do it.

    You may need to capture statement-level starting and completed events for batches and RPC as well, though I would not start there. If you do this, you really need to think hard about how you can filter it down at capture time. Otherwise, you're dealing with a lot of data, maybe even enough to negatively impact performance yourself. Only do this if you're not getting enough information from the batch and RPC starting and completed events. But if you do it, please, for your system's sake, put good filtering in.
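    Putting that advice together, the session might look something like the sketch below (the database name, file path, and file sizes are placeholders to adjust for your environment):

    CREATE EVENT SESSION [xe_deadlocks] ON SERVER
    ADD EVENT sqlserver.xml_deadlock_report,
    -- batch/rpc events filtered to the one database under investigation
    ADD EVENT sqlserver.sql_batch_starting  (WHERE sqlserver.database_name = N'cmx_ors'),
    ADD EVENT sqlserver.sql_batch_completed (WHERE sqlserver.database_name = N'cmx_ors'),
    ADD EVENT sqlserver.rpc_starting        (WHERE sqlserver.database_name = N'cmx_ors'),
    ADD EVENT sqlserver.rpc_completed       (WHERE sqlserver.database_name = N'cmx_ors')
    ADD TARGET package0.event_file (SET filename = N'X:\SQLDUMP\xe_deadlocks.xel',
                                        max_file_size = 50,      -- MB per file
                                        max_rollover_files = 4)
    WITH (TRACK_CAUSALITY = ON,             -- group related events together
          MAX_DISPATCH_LATENCY = 30 SECONDS,
          STARTUP_STATE = ON);              -- survive instance restarts
    GO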

    I think that covers it all.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thanks a lot Grant for the guidance. Many thanks.

  • Actually, the code is doing one UPDATE and one DELETE.

    The UPDATE needs an index on LOCK_GROUP_STR  and ROWID_TABLE to make it fast.

    The DELETE needs an index on just the LOCK_GROUP_STR to make it fast.

    Of course, if the definitions of the parameters accurately reflect the column definitions, then you won't be able to add the necessary indexes, because that would mean the LOCK_GROUP_STR and ROWID_TABLE columns are each an NVARCHAR(4000).

    What that means is that you can't actually build the needed indexes on this table.

    If the LOCK_GROUP_STR and ROWID_TABLE columns are something else, such as a VARCHAR(), then an index wouldn't work either, because that makes for an implicit cast where each of the two columns must be scanned and converted to NVARCHAR(4000) before they can be compared.

    So, the next thing for you to do is to post the CREATE TABLE statement for the "cmx_ors.dbo.C_REPOS_APPLIED_LOCK" table, being sure to include all constraints and all indexes, so that we can advise you.

    The other thing is that it looks like the code is creating its own row-locking mechanism... I cannot express enough just how wrong a thing that is to do here.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

• Thanks Jeff. Even I was surprised: why would they even try to implement their own locking mechanism when SQL Server takes care of that?

    CREATE TABLE [dbo].[C_REPOS_APPLIED_LOCK] (
        [ROWID_LOCK] [nchar](14) NOT NULL
        ,[CREATE_DATE] [datetime2](7) NOT NULL
        ,[CREATOR] [nvarchar](50) NOT NULL
        ,[LAST_UPDATE_DATE] [datetime2](7) NULL
        ,[UPDATED_BY] [nvarchar](50) NULL
        ,[ROWID_TABLE] [nchar](14) NOT NULL
        ,[LOCK_GROUP_STR] [nvarchar](100) NOT NULL
        ,[LOCK_EXCLUSIVE_IND] [bigint] NOT NULL
        ,[JOB_TYPE_STR] [nchar](1) NOT NULL
        ,[LOCK_QUERY_SQL] [nvarchar](2000) NULL
        ,[MODULE_NAME] [nvarchar](100) NULL
        ,[INTERACTION_ID] [nvarchar](100) NULL
        ,CONSTRAINT [PK_APPLIED_LOCK] PRIMARY KEY CLUSTERED ([ROWID_LOCK] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [CMX_DATA]
    ) ON [CMX_DATA]
    GO

    ALTER TABLE [dbo].[C_REPOS_APPLIED_LOCK] ADD CONSTRAINT [DF_C_REPOS_APPLIED_LOCK_CREATE_DATE] DEFAULT (getdate()) FOR [CREATE_DATE]
    GO
    ALTER TABLE [dbo].[C_REPOS_APPLIED_LOCK] ADD CONSTRAINT [DF_C_REPOS_APPLIED_LOCK_LOCK_EXCLUSIVE_IND] DEFAULT ((0)) FOR [LOCK_EXCLUSIVE_IND]
    GO
    ALTER TABLE [dbo].[C_REPOS_APPLIED_LOCK] ADD CONSTRAINT [DF_C_REPOS_APPLIED_LOCK_JOB_TYPE_STR] DEFAULT ('A') FOR [JOB_TYPE_STR]
    GO
    ALTER TABLE [dbo].[C_REPOS_APPLIED_LOCK] WITH CHECK ADD CONSTRAINT [FK_APPLIED_LOCK_EWGPS3] FOREIGN KEY ([ROWID_TABLE]) REFERENCES [dbo].[C_REPOS_TABLE] ([ROWID_TABLE])
    GO
    ALTER TABLE [dbo].[C_REPOS_APPLIED_LOCK] CHECK CONSTRAINT [FK_APPLIED_LOCK_EWGPS3]
    GO

    ---- index definitions

    CREATE NONCLUSTERED INDEX [C_REPOS_ROWID_JOBTYPE_IND] ON [dbo].[C_REPOS_APPLIED_LOCK] ([ROWID_TABLE] ASC, [JOB_TYPE_STR] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [CMX_DATA]
    GO
    CREATE NONCLUSTERED INDEX [NI_APPLIED_LOCK_EWGPS3] ON [dbo].[C_REPOS_APPLIED_LOCK] ([ROWID_TABLE] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [CMX_INDX]
    GO
    CREATE NONCLUSTERED INDEX [NI_APPLIED_LOCK_SBL3R8] ON [dbo].[C_REPOS_APPLIED_LOCK] ([LOCK_GROUP_STR] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [CMX_INDX]
    GO

  • I think that three things need to be done before anything else....

    1. Add an index to support the lookup for the update.  It should be a unique index consisting of ROWID_TABLE, LOCK_GROUP_STR, and ROWID_LOCK (to support the uniqueness even though not used directly).
    2. Modify the index that looks like this (which is currently being ignored by the DELETE)...

      CREATE NONCLUSTERED INDEX [NI_APPLIED_LOCK_SBL3R8] ON [dbo].[C_REPOS_APPLIED_LOCK] ([LOCK_GROUP_STR] ASC)

      ... to look like this...

      CREATE UNIQUE NONCLUSTERED INDEX [NI_APPLIED_LOCK_SBL3R8] ON [dbo].[C_REPOS_APPLIED_LOCK] ([LOCK_GROUP_STR] ASC,ROWID_LOCK)

      Yes, I know that in both cases (this one and the new one suggested) ROWID_LOCK is auto-magically included, but adding it to the key list will allow you to make the index unique, and that may trick the optimizer into avoiding full scans.

    3. Speaking of ignored indexes, I think the developers need to "right size and match" the passed parameters to match the table so the implicit casts don't mess with the SARGability of the indexes that are coming into play for the UPDATE (once the new index is in place) or the DELETE.  I don't have the actual execution plan to go by but I think them doing things like passing a parameter as an NVARCHAR(4000) on something that's (for example) an NCHAR(14) or NVARCHAR(100) will prevent even the new and modified indexes above from actually being used for the adhoc UPDATE and DELETE statements appearing in your deadlocks.
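    Sketching those suggestions in T-SQL (the name of the index in point 1 is made up here, and the right-sized parameter declarations in point 3 follow the column definitions from the CREATE TABLE posted earlier):

    -- Point 1: unique index to support the UPDATE's WHERE clause (hypothetical name)
    CREATE UNIQUE NONCLUSTERED INDEX [NI_APPLIED_LOCK_UPDATE]
        ON [dbo].[C_REPOS_APPLIED_LOCK] ([ROWID_TABLE], [LOCK_GROUP_STR], [ROWID_LOCK]);

    -- Point 2: the modified version of the existing index, rebuilt in place
    CREATE UNIQUE NONCLUSTERED INDEX [NI_APPLIED_LOCK_SBL3R8]
        ON [dbo].[C_REPOS_APPLIED_LOCK] ([LOCK_GROUP_STR] ASC, [ROWID_LOCK])
        WITH (DROP_EXISTING = ON);

    -- Point 3: parameters sized to match the columns, so no implicit casts
    -- (contrast with the NVARCHAR(4000) declarations in the deadlock graph's inputbuf)
    DECLARE @P7 nchar(14);      -- matches ROWID_TABLE nchar(14)
    DECLARE @P8 nvarchar(100);  -- matches LOCK_GROUP_STR nvarchar(100)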

    --Jeff Moden



  • Thanks a lot Jeff. Many many thanks.

    I have few follow-up questions.

    1. Why a unique non-clustered index? Why not just a non-clustered index for [NI_APPLIED_LOCK_SBL3R8]? I ask because I don't know the nature of the data, i.e., whether those columns have repeated values.
    2. Will adding retry logic to the existing code (let's say, 3 retries) help? Or is there any problem in doing that?
    3. Some common advice we see a lot on the internet when it comes to deadlocks is to access the objects in the same order. Is that relevant in this particular case? If I am asking an irrelevant question, please excuse me; this one was out of curiosity.
    4. Using the above deadlock graph, is there a way/DMV to pull the plans from cache for these particular SQL statements (the DELETE and UPDATE)? Basically, I want to see if there are any missing indexes for those plans.
  • The data in the deadlock graph includes the sqlhandle. You can use this against either the DMVs or Query Store (if it's enabled) to retrieve query information. The sqlhandle is the same as the sql_handle in the DMVs. Just query as necessary.
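    For example, something along these lines (a sketch; the sql_handle value shown is the one from the UPDATE process in the deadlock graph above, and the plan can age out of cache, so query soon after the deadlock if you can):

    DECLARE @handle varbinary(64) =
        0x0200000004296002253d1e77f9e6736ee848e3316b90c0000000000000000000000000000000000000000000;

    SELECT st.text AS query_text,
           qp.query_plan,       -- inspect for scans / missing-index hints
           qs.execution_count
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE qs.sql_handle = @handle;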

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • vsamantha35 wrote:

    1. Why a unique non-clustered index ? why not just a non-clustered index for [NI_APPLIED_LOCK_SBL3R8]. Because I don't know the nature of the data if those columns has repeated values.

    To be sure, the leading column of LOCK_GROUP_STR is guaranteed to NOT be unique and that’s why I added the guaranteed-to-be-unique ROWID_LOCK column, which is also the “clustering key”. That means that the index already contains the ROWID_LOCK column and there will be very low cost in explicitly adding it to the index.

    The reason that I’m suggesting that the index be forced to be unique in such a manner is that it may give the optimizer enough of a hint to not only use the index but, perhaps, to make it less likely to do a scan. There’s no guarantee of it not doing a scan, and there’s no guarantee that a non-unique index will always do a scan. We’re just trying to hedge some bets. It may be a totally futile effort to force uniqueness, but it’s a nearly zero-cost effort to try.

    vsamantha35 wrote:

    2. Will adding a retry logic in the existing code (lets say ) for 3 times , will it help ? or is there any problem in doing that?

    I’d save that for a last-ditch effort if nothing else works. The things that I recommended are a much better use of time. For example, I believe that it’s absolutely imperative to find and fix whatever code is spawning the god-forsaken queries that default to assigning totally inappropriate NVARCHAR(4000) (and other equally inappropriate data-types) to the passed parameters, so that the parameters are made identical to the data-types in the table and the implicit casts are avoided. Not fixing that problem may make it totally impossible for proper indexing to speed up the UPDATE and DELETE code involved in the deadlocks, and so seriously decrease your chances of reducing them.

    I'll also state that retries after a deadlock in this situation may actually undo the effect that they intended for this home-grown attempt at proper locking.
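    For reference (and bearing that caveat firmly in mind), the retry pattern under discussion usually looks something like this sketch, where error 1205 is the deadlock-victim error and the DELETE with @P0 stands in for whichever statement is being retried:

    DECLARE @retries int = 3;
    WHILE @retries > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            DELETE FROM C_REPOS_APPLIED_LOCK WHERE LOCK_GROUP_STR = @P0;
            COMMIT TRANSACTION;
            BREAK;  -- success, stop retrying
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
            IF ERROR_NUMBER() = 1205 AND @retries > 1
                SET @retries -= 1;  -- we were the deadlock victim: try again
            ELSE
                THROW;              -- anything else (or out of retries): re-raise
        END CATCH;
    END;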

    vsamantha35 wrote:

    3. Some common advise we see a lot on internet when it comes to deadlocks is, changing the order of the objects in same order ? is it relevant in this particular case? If I am asking any irrelevant question, please excuse me. This question was out of curiosity.

    I’ve found that such an effort is a total waste of time and don’t know how anyone would think that such a thing has even a remote chance of actually solving deadlock issues.

    For example, in a previous company that I worked for, there was code to get a “Next ID” from a table for various tables (they used it instead of an IDENTITY column). It was used everywhere. It was the only code that did anything with the “NextID” table, so the table was always affected “in the same order”, and yet there was an average of more than 400 deadlocks in an 8-hour period, with not-so-rare spikes to more than 4,000 deadlocks in that same 8-hour period. Inside the explicit transaction of the code, a SELECT was used to get the “Next ID” and that was followed by an UPDATE to increment the value in the table so that the next “Next ID” was ready to consume. Even though the code always did things in exactly the same order, it was a near guarantee of deadlocks under any and all concurrent use.

    Once we fixed the code that got the “Next ID”, the deadlocks disappeared and never came back even though the use of the code eventually increased by two orders of magnitude and has never caused a deadlock since we fixed it back in 2006.

    Just in case it comes up (and it probably will), one of the “fixes” for such deadlock problems is to do a little trick with “sp_GetAppLock” to prevent more than one instance of the code from running. The thought is that if it can only run once at a time, there will never be a deadlock and that’s actually what will happen. It will make the deadlocks go away… but at a great cost.

    To be absolutely clear, that method absolutely destroys all chances of concurrency and forces the serialization of the runs. In other words, although you will have solved the deadlock problem, you will also have MASSIVE blocking (having the effect of paralyzing many, and sometimes all, CPUs) and MASSIVE slowdowns. That’s NOT speculation on my part. I went through that silliness at the company I currently work for about 2 years ago. The actual fix was to rewrite the code they were running so that it would run much faster and also lock a lot fewer resources, by avoiding totally unnecessary index scans and making more effective use of SARGable criteria.

    The use of sp_GetAppLock for this type of thing is (there are no other words for it) ignorant, stupid, and lazy.

    vsamantha35 wrote:

    4. Using the above deadlock graph is there a way/dmv to pull out the plans from cache for these particular sql statements (DELETE & UPDATE) ? Basically, I want to see if there are any missing indexes for the plans ?

    Grant answered this above. The problem is that it won’t show you the values of the parameters if you need them to troubleshoot especially if more than one plan is somehow generated. Such plans also usually only show what the “planned” plan was and will frequently only show estimated rowcounts rather than actual row counts, etc.

    The only way (IMHO but I may be ignorant of a better way) to actually determine what actually happened is to capture the code and the values of the parameters used and then run the code with those exact parameters to view the “ACTUAL Execution Plan”.

    That means that you’ll either need to setup a “server side” SQL Profiler run or setup an Extended Events session to capture that actual information and run it in SSMS to get the ACTUAL plan used during execution.

    I can’t stress this enough, though… the developers MUST make it so that the data-types of the parameters being passed exactly match those of the table or we’re just trying to piss up a rope on a windy day here.

    And, finally, I'll also restate that all of our efforts to resolve this deadlocking issue may be all for naught because of the home-grown code attempt to control locking but it IS worth the try.

    --Jeff Moden



• Thanks for the detailed explanation. I will check with the app team to see if they can reproduce the issue.
