Cupidking,

I apologize for the delay in responding; I wanted to make sure I was thorough in my research and testing so I could give you my best opinion on the answer. With that said, keep in mind that this is an opinion, based on my own independent testing.

I researched the subject to see if I could find any evidence that would support or refute the claim that the index statistics need to be updated first.

My research indicates that updating the stats could be required on SQL Server 2000 (especially prior to the release of SP2); however, to get an incorrect reading you would have to update a table and then query the count from ‘sysindexes’ in the SAME transaction. Microsoft resolved this in SP2 for SQL Server 2000. There was another anomaly in that Service Pack release that could also produce a miscount of rows; it was specific to the ‘sysindexes.rowmodctr’ column. I am including links to those two Knowledge Base articles for your reference; if you are trying to use this script on SQL Server 2000, these links may help (a quick way to compare the sysindexes figures against an actual count is sketched just after the links):

    KB 308822: http://support.microsoft.com/kb/308822

    KB 317847: http://support.microsoft.com/kb/317847
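If you do want to see those SQL Server 2000 numbers for yourself, here is a minimal sketch of the comparison (my own example, assuming a test table like the dbo.abc_TestCount table created in the script further below; on SQL Server 2005 sysindexes only remains as a compatibility view):

--Metadata-based count for the heap or clustered index
SELECT rowcnt, rowmodctr
FROM sysindexes
WHERE id = OBJECT_ID('dbo.abc_TestCount')
AND indid IN (0, 1);

--Actual count for comparison
SELECT COUNT(*) AS ActualRows
FROM dbo.abc_TestCount;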

In either case, that doesn’t apply to my script, because it was designed for use with SQL Server 2005, not SQL Server 2000.

With that said, I am a firm believer that nothing is perfect and there must be a way to break the script (in this case, to make it miscount the records). I attempted a few different approaches, and all of them passed with 100% accurate results. The best method I could think of to force a miscount was to create a test table, load 1,000,000 records into it, and verify the count was accurate, all within the same transaction (the SQL 2000 problem would have resulted in a count of 0); this confirms that the earlier problem has been resolved. I then used a script that creates the test table, loads the 1,000,000 records, deletes 300,000+ of them, and then immediately performs a record count. NOTE: all actions were performed within as few commits as possible, and in particular the delete of the records and the record count were done in ONE committed transaction. My results came back with 100% accuracy. I am including two-thirds of the script (I left out the code for the ‘Get Record Count’ because you obviously have it already).

From my testing and results I can only deduce that the stats, if they are even a factor, do not need to be updated manually (a sketch of what that manual update would look like follows this paragraph). Again, this is all informal testing and I may very well be incorrect, but I have nothing that proves the need to update the stats. By all means, I don’t believe it’s impossible for the miscount to happen, and maybe someday I will come across a method that does produce an incorrect count; if so, I’ll update the script or submit a new script to address it. If you can provide any additional information, specific settings, or a series of events that could replicate this in SQL Server 2005, I’d be happy to try to reproduce it and work out why it is occurring.
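For reference, if you did want to force that manual update, a minimal sketch (again using the test table and TEST_DB database from the script below) would be something along these lines; UPDATE STATISTICS refreshes the optimizer statistics, and DBCC UPDATEUSAGE with COUNT_ROWS corrects the row counts recorded in the catalog:

--Refresh optimizer statistics on the test table
UPDATE STATISTICS dbo.abc_TestCount;

--Correct any inaccurate row counts stored in the catalog
DBCC UPDATEUSAGE ('TEST_DB', 'dbo.abc_TestCount') WITH COUNT_ROWS;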

The following is the script I used for testing. For your reference (and that of anyone else who comes across this topic), I used SQL Server 2005 Developer Edition (ProductVersion: 9.00.3054.00, ProductLevel: Service Pack 2, IsClustered: 0).
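If you want to compare your own environment, those values come from SERVERPROPERTY (I’ve added Edition here just for completeness):

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
SERVERPROPERTY('ProductLevel') AS ProductLevel,
SERVERPROPERTY('Edition') AS Edition,
SERVERPROPERTY('IsClustered') AS IsClustered;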

    --***START GET RECORD COUNT BREAKING SCRIPT***

    USE [TEST_DB];

    GO

    --Detect if table already exists;

    --if so then delete existing table

IF OBJECT_ID('dbo.abc_TestCount', 'U') IS NOT NULL

    DROP TABLE dbo.abc_TestCount;

    GO

--Create the new table (it should have no

--indexes on it at all since it is brand new)

CREATE TABLE dbo.abc_TestCount

    (

--Avoid UNIQUE constraints on these columns

--(a UNIQUE constraint would create an index automatically)

    TestID INT NOT NULL,

    TestText nvarchar(25) NOT NULL,

TestDate datetime NOT NULL

    );

    --Declare variables

    DECLARE @tmpCount INT

    DECLARE @intCount INT

    DECLARE @stTest nvarchar(25)

    DECLARE @stTestMod nvarchar(25)

    DECLARE @dtDate datetime

    DECLARE @dtDateMod datetime

    --Set variables

    SET @tmpCount = 0

    SET @intCount = 1

    SET @stTest = 'Testing # '

    SET @dtDate = GETDATE()

--Perform loop to insert the records

    --1 million records should be sufficient

    WHILE @tmpCount < 1000000

--BEGIN/END is unavoidable because of the loop;

--running 'GetRecordCount' inside the loop

--over 1 million times can't be justified, so it stays outside

--to keep the measurements/constraints realistic

    BEGIN

    SET @tmpCount = @tmpCount + 1

    SET @stTestMod = @stTest + CAST(@intCount as nvarchar(25))

    SET @dtDateMod = DATEADD(day, 1, @dtDate)

    SET @intCount = @intCount + 1

    INSERT INTO dbo.abc_TestCount

    VALUES (@tmpCount, @stTestMod, @dtDateMod);

    END

    --Immediately deleting 300,000+ records before record count

    --This should simulate an 'out-of-date' index

    --if an index somehow snuck by

DELETE FROM dbo.abc_TestCount

WHERE TestID > 305675 AND TestID < 623456;

    --Now you would put the T-SQL to Get the Record Count

    --Remember don't use any additional commits/GO commands

    --between this T-SQL and the Get Record Count.
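If anyone following along doesn’t have the ‘Get Record Count’ script handy, a minimal stand-in (my own sketch, not the actual script) is to compare a full COUNT(*) against the metadata row count from sys.dm_db_partition_stats:

--Stand-in for the Get Record Count step (not the original script)
SELECT COUNT(*) AS ActualRows
FROM dbo.abc_TestCount;

SELECT SUM(row_count) AS MetadataRows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.abc_TestCount')
AND index_id IN (0, 1);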

    ~ Without obstacles, you cannot progress ~
    http://sqln.blogspot.com/