Permanent Table, Transient Data

  • Hi

    I have a feeling I am in danger of stepping on something soft and smelly. Before I take another step, I have decided to seek some advice from the SQL sages. First, some background.

    I have a process which requires transactions from two different tables to be matched and the matching to be recorded in a third. Specifically, I am developing a Bank Reconciliation app and I need to match transactions from the General Ledger of our JDE financial system against transactions imported from the Bank.

    Thus, I have separate Bank and GL transaction tables, along with a matches table which records the matching association. My user interface provides a matching function which is linked to a "MatchPending" table. A user selects records from both sides to match, with each selected transaction being added to the Pending table. If the matching is saved, the GL and Bank transactions are updated to record them as matched, and the matching details are inserted into the matches table. Pending records can be explicitly removed by the user, and any records that remain when the form is closed are deleted from the table.
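    For what it's worth, the save step as I understand it (update both transaction tables, insert the matches, clear the pending rows) really wants to run as a single transaction, so a failure part-way can't leave the three tables out of step. A rough sketch of what I mean, with every table and column name guessed from the description above rather than taken from the real schema:

    ```sql
    -- Hypothetical names throughout: MatchPending, GLTransaction, BankTransaction, Matches.
    -- Pattern shown: all-or-nothing save of a user's pending matches (2008-compatible).
    BEGIN TRY
        BEGIN TRANSACTION;

        -- Flag the selected GL transactions as matched
        UPDATE gl
        SET    gl.Matched = 1
        FROM   dbo.GLTransaction AS gl
        JOIN   dbo.MatchPending  AS p ON p.TransactionID = gl.GLTransactionID
        WHERE  p.UserID = @UserID AND p.TransactionSource = 'GL';

        -- Flag the selected bank transactions as matched
        UPDATE b
        SET    b.Matched = 1
        FROM   dbo.BankTransaction AS b
        JOIN   dbo.MatchPending    AS p ON p.TransactionID = b.BankTransactionID
        WHERE  p.UserID = @UserID AND p.TransactionSource = 'BANK';

        -- Record the association, then clear this user's pending rows
        INSERT INTO dbo.Matches (TransactionSource, TransactionID, MatchedBy, MatchedOn)
        SELECT p.TransactionSource, p.TransactionID, p.UserID, GETDATE()
        FROM   dbo.MatchPending AS p
        WHERE  p.UserID = @UserID;

        DELETE FROM dbo.MatchPending WHERE UserID = @UserID;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);  -- re-raise; THROW is 2012+, so RAISERROR for 2008
    END CATCH;
    ```

    How the match rows are actually shaped (paired GL/Bank rows vs. a match-group key) is an assumption here; the point is only the transaction boundary around the four statements.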

    The problem I am having relates to the Pending table and multiple users - occasionally, when several users are matching at the same time, a non-specific database error is returned by the user interface. If there is only one user on the system, the error does not occur. Records in the Pending table carry a Bank Account Number and a User ID, only one user will be operating on a single account, and each user only alters their own records, so I don't believe I'm suffering from record locking or contention issues.
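    One structural option worth mentioning: since every access path to the Pending table is by user (and account), clustering the table on those columns keeps each user's rows physically together, which can reduce page-level interference between concurrent users on insert and delete. A hypothetical sketch of such a table, with the column list guessed from the description above:

    ```sql
    -- Hypothetical DDL; names and types are guesses, not the real schema.
    CREATE TABLE dbo.MatchPending
    (
        UserID            varchar(30) NOT NULL,
        BankAccountNumber varchar(20) NOT NULL,
        TransactionSource char(4)     NOT NULL,  -- 'GL' or 'BANK'
        TransactionID     int         NOT NULL,
        AddedOn           datetime    NOT NULL DEFAULT (GETDATE()),
        CONSTRAINT PK_MatchPending
            PRIMARY KEY CLUSTERED (UserID, BankAccountNumber, TransactionSource, TransactionID)
    );
    ```

    Whether this helps depends on what the error actually is, but leading the clustered key with UserID means a user's inserts and deletes land in their own range of the index.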

    The error pops up when a user attempts to add a new record to the Pending table, and I am beginning to suspect that the method I am using may not be the most robust, or that the table requires a different maintenance approach given the transient nature of the data. I have reached these conclusions partly by elimination, and partly because I can no longer force the error on the test system. I'll explain.

    On Friday, from my test system, I could generate the error. Today, Monday, I can't. On Friday, I removed two indexes (FinancialYear and FinancialPeriod) from the Pending table as they were not relevant. I also ran sp_updatestats on that database. Over the weekend, a maintenance plan performed an index rebuild. I have asked the users to re-test the interface to see whether they still receive the errors, but I won't hear back until later today or early tomorrow.

    I can see several alternative paths to follow, but this leads me back to my fear of stepping on something soft and squishy. If anyone has been in the same or similar position with a table and data of this nature, I would be happy to hear what you have to say.

    My thanks in advance.

    Regards

    Rowan

  • Without seeing the code, the structure, the error messages - something other than vague descriptions - I'm hard-pressed to suggest anything meaningful. The nature of the problem (works fine with one person, errors with more than one) suggests contention of some type: possibly deadlocks, possibly just blocking. But that's as far as I can guess with completely inadequate knowledge.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Hi Grant

    Thanks for taking the time to respond. Rather than looking for a specific solution, I guess what I really need is advice on how to handle this type of scenario, i.e. a fixed table holding transient data. I agree that a proper analysis of the problem requires more than a simple description.

    I think contention is almost certainly the cause, and I suspect it involves the indexing - given the short lifetime of the records, possibly fragmentation. I have been doing a lot of reading over the last 24 hours and I am beginning to think that it is related to the auto-update of statistics, which is enabled on that database. The fact that I was unable to replicate the error during yesterday's testing, but was able to cause it on Friday, suggests something has changed. The most significant change I can currently think of is the weekly database maintenance, which includes an index rebuild.
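    One quick check that may support or rule out the statistics theory: ask SQL Server when the statistics on the pending table were last updated, and compare that to when the errors occur. STATS_DATE and sys.stats are real, documented objects; only the table name below is a guess:

    ```sql
    -- When were the statistics on the pending table last updated?
    SELECT  s.name                              AS stats_name,
            STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM    sys.stats AS s
    WHERE   s.object_id = OBJECT_ID('dbo.MatchPending');  -- table name is a guess
    ```

    If the dates line up with Friday's sp_updatestats run and the weekend rebuild, that would be consistent with the pattern you describe.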

    I had originally done it this way because it was simpler to provide editable data to a form using a table. But given the problem I am having, I suspect I might have to change this technique, or alternatively change the management of the table. So I'm hoping that someone with experience of similar data will have some advice, or someone with a better technical understanding of SQL than me (not at all hard) will be able to advise.

    Cheers

    Rowan

  • In terms of guidance, the detailed answer is to get my 2012 book on query tuning. The more general answer is to capture wait statistics before and after you run your process, several times, using sys.dm_os_wait_stats. Query it before and after the process runs and the difference tells you what the system is waiting on; that helps you understand the issues. Then capture query metrics using trace events (since you're on 2008 - if you were on 2012, or felt more confident, I'd suggest extended events, which are generally safer than trace events) to see which queries are using which resources. Correlate that with the wait statistics and work on the queries that are causing the most waits. The work basically involves looking at the queries for common code smells, and at the execution plans for issues with indexes and statistics, plus more code smells.
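    The before/after pattern described above can be sketched as follows. sys.dm_os_wait_stats and its columns are the real DMV; everything else (the temp table name, running it around "the matching process") is just the pattern:

    ```sql
    -- Snapshot the cumulative wait stats before the run...
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    INTO   #waits_before
    FROM   sys.dm_os_wait_stats;

    -- ...run the matching process, then snapshot again and take the difference:
    SELECT  w.wait_type,
            w.wait_time_ms        - b.wait_time_ms        AS wait_time_ms_delta,
            w.waiting_tasks_count - b.waiting_tasks_count AS tasks_delta
    FROM    sys.dm_os_wait_stats AS w
    JOIN    #waits_before        AS b ON b.wait_type = w.wait_type
    WHERE   w.wait_time_ms - b.wait_time_ms > 0
    ORDER BY wait_time_ms_delta DESC;
    ```

    The wait types at the top of the delta (lock waits, latch waits, etc.) point at what the system was actually waiting on during the run.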

    And it does sound like you might be hitting statistics issues. It's possible your statistics are out of date (auto-update of statistics is a good thing - don't turn it off unless you can prove, beyond doubt, that it's causing you problems), because the auto-update threshold is fairly conservative (500 rows modified plus 20% of the number of rows in the table before an update fires). You can try running UPDATE STATISTICS WITH FULLSCAN against the tables or indexes used by your queries to see if that addresses the issue.
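    UPDATE STATISTICS ... WITH FULLSCAN is the real T-SQL syntax; the table and index names below are guesses standing in for your own objects:

    ```sql
    -- Full-scan statistics update on the whole table (all stats on it):
    UPDATE STATISTICS dbo.MatchPending WITH FULLSCAN;  -- table name is a guess

    -- Or target a single statistics object or index on the table:
    UPDATE STATISTICS dbo.MatchPending PK_MatchPending WITH FULLSCAN;  -- index name is hypothetical
    ```

    FULLSCAN reads every row rather than sampling, so it gives the most accurate statistics at the cost of more I/O during the update.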

    Hopefully that's helpful.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

