Table size out of control...

  • In the dev environment there is a table that gets loaded with verbose log data by a .NET app.  The only entries I care to retain in that table are those designated as ERROR level.  Nearly 2 million rows are inserted into this table every hour.

    What do you suggest is the best way to handle this?

    Triggers - I think the table is too busy for this.

    Scheduled job - maybe, but with a table this large I would prefer to truncate rather than delete the non-error rows, and a job runs on a schedule rather than acting on a condition.

    Alert - I'm just starting to learn about these and I'm not sure what type to use or how to identify the condition that should fire it.

    The other part of this is how to manage those 10k error records.  Once the history table reaches 10k records, remove 500?

    Any help would be greatly appreciated. Thanks.

  • Changing the .NET app so that it only loads the data which is required is not possible, I presume?

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

  • If nested triggers are disabled, just create an INSTEAD OF INSERT trigger which filters the inserted table on the conditions you want, e.g.:

    CREATE TRIGGER TR_I_YourTable
    ON YourTable
    INSTEAD OF INSERT
    AS
    BEGIN
        -- Only rows matching the filter are actually written; the rest are discarded
        INSERT INTO YourTable
        SELECT YourColumns
        FROM inserted
        WHERE YourColumn LIKE '%ERROR%';
    END
    GO

    If nested triggers are enabled you will also need to play around with TRIGGER_NESTLEVEL().
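    For example, a minimal re-entrancy guard at the top of the trigger body might look like this (a sketch only, using the same placeholder trigger name):

        -- Bail out if this trigger is already on the call stack, so the INSERT
        -- issued inside it cannot fire the trigger again.
        IF TRIGGER_NESTLEVEL(OBJECT_ID('dbo.TR_I_YourTable')) > 1
            RETURN;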

    As Phil has mentioned, the best option is to sort out the .NET code.

     

  • Agree with Ken.  Use a trigger to only write the data you want in the table to begin with.

    The only thing Ken left out was a:

    SET NOCOUNT ON

    at the start of the trigger, for efficiency.
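
    That is, the body would open like this (a sketch using Ken's placeholder names):

        CREATE TRIGGER TR_I_YourTable
        ON YourTable
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;  -- suppress the extra "N rows affected" messages
            -- ... Ken's filtered INSERT goes here ...
        END
        GO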

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • Phil - Correct.  Devs want it all in dev.

    Ken - AWESOME.  "INSTEAD OF" - why didn't I think of that?  For the history table, is a trigger that deletes ## records when sys.partitions.rows for that table shows the count exceeds 10k the way to go?  Is there a better way?  Something like the sketch below is what I have in mind.
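
    (A rough sketch only - dbo.ErrorHistory and LogDate are made-up names for the history table and its date column.)

        -- Once the history table passes 10k rows, trim the oldest 500.
        IF (SELECT SUM(p.rows)
            FROM sys.partitions p
            WHERE p.object_id = OBJECT_ID('dbo.ErrorHistory')
              AND p.index_id IN (0, 1)) > 10000
        BEGIN
            WITH Oldest AS
            (
                SELECT TOP (500) *
                FROM dbo.ErrorHistory
                ORDER BY LogDate
            )
            DELETE FROM Oldest;
        END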

    Scott - Thanks.  Good reminder.

  • Budd wrote:

    Phil - Correct.  Devs want it all in dev.

    Ken - AWESOME.  "INSTEAD OF" - why didn't I think of that?  For the history table, is a trigger that deletes ## records when sys.partitions.rows for that table shows the count exceeds 10k the way to go?  Is there a better way?

    Scott - Thanks.  Good reminder.

    If you're on SQL 2016, check whether page compression will save you significant space instead (you can estimate it with sys.sp_estimate_data_compression_savings).  10K rows is actually rather small nowadays for SQL Server; I wouldn't worry about it until at least 200K or more rows, unless they're (very) wide rows.
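
    Something along these lines (YourTable is a placeholder; the estimate proc only reports projected sizes, it changes nothing):

        -- Estimate the savings PAGE compression would give (read-only).
        EXEC sys.sp_estimate_data_compression_savings
            @schema_name      = 'dbo',
            @object_name      = 'YourTable',
            @index_id         = NULL,
            @partition_number = NULL,
            @data_compression = 'PAGE';

        -- If the estimate looks worthwhile, rebuild with page compression.
        ALTER TABLE dbo.YourTable REBUILD WITH (DATA_COMPRESSION = PAGE);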

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • Budd wrote:

    Phil - Correct.  Devs want it all in dev.

    Then add a switch to the code … this sort of thing:

    // Illustrative only - verbose logging in dev, standard everywhere else
    if (environment == "DEV")
        debugMode = "Verbose";
    else
        debugMode = "Standard";

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

