Two Things I Would Change in SSISDB Setup

  • Comments posted to this topic are about the item Two Things I Would Change in SSISDB Setup

  • Great information - this kind of practical advice is much more valuable than your standard regurgitated walkthroughs. A bit off topic, but SQL Agent is long overdue for enhancement. Something as simple as folders would make managing it so much easier.

  • I completely agree... when I look at some of my jobs lists... I get lost trying to identify what I'm looking for.

    Let's not even get into SSRS Subscription schedules.... Nothing like 500 jobs with nothing but a GUID for a name.
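    For anyone stuck with those GUID-named SSRS subscription jobs, a query along these lines can map them back to their reports. This is a sketch, assuming the default ReportServer database name; adjust the database name for your environment.

    ```sql
    -- Map SSRS subscription Agent jobs (named by schedule GUID) back to reports.
    -- Assumes the report server database is named ReportServer.
    SELECT
        j.name          AS JobName,       -- the GUID shown in SQL Agent
        c.[Path]        AS ReportPath,
        s.[Description] AS SubscriptionDescription
    FROM msdb.dbo.sysjobs AS j
    JOIN ReportServer.dbo.ReportSchedule AS rs
        ON j.name = CAST(rs.ScheduleID AS sysname)
    JOIN ReportServer.dbo.Subscriptions AS s
        ON rs.SubscriptionID = s.SubscriptionID
    JOIN ReportServer.dbo.[Catalog] AS c
        ON s.Report_OID = c.ItemID;
    ```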



    --Mark Tassin
    MCITP - SQL Server DBA
    Proud member of the Anti-RBAR alliance.
    For help with Performance click this link
    For tips on how to post your problems

  • Thanks for the article.

  • Hi,

    The default of 365 days is way too long once you have a busy server. I remember when I fell foul of that one. :(

    To alter the retention period, I use this method:

    EXEC catalog.configure_catalog @property_name = N'RETENTION_WINDOW', @property_value = 7;

    GO

    There is also this one for changing the maximum number of versions kept per project:

    EXEC catalog.configure_catalog @property_name = N'MAX_PROJECT_VERSIONS', @property_value = 4;

    GO
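
    It can be worth verifying the settings before and after changing them. A quick sketch, assuming you are connected to the instance hosting SSISDB:

    ```sql
    -- Inspect the catalog's current configuration values.
    SELECT property_name, property_value
    FROM SSISDB.catalog.catalog_properties
    WHERE property_name IN (N'RETENTION_WINDOW', N'MAX_PROJECT_VERSIONS',
                            N'OPERATION_CLEANUP_ENABLED', N'VERSION_CLEANUP_ENABLED');
    ```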

    Ian

  • Practical, useful information from the front lines. These settings are analogous to setting Max Memory on a new instance. They certainly caught my team and me off guard when we first implemented SSISDB. Thank you for sharing.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • This topic is one of the reasons that, in a presentation I often give on the SSIS Catalog, I always begin with how to install the SSISDB and reset the retention properties. I make everybody in the room repeat it for emphasis.

  • And thanks to Ben for the tip about permissions.

    In my environments we just blanket-grant sa privileges for running maintenance scripts like this one and syspolicy_purge_history, so it slipped my attention.

    I also quit worrying about setting the retention window once I moved to my script, because I could just pass it in.



    --Mark Tassin
    MCITP - SQL Server DBA
    Proud member of the Anti-RBAR alliance.
    For help with Performance click this link
    For tips on how to post your problems

  • Mark thanks again for writing your article and script. It has been very helpful to me.

    Ben

  • This is another thing you may want to do upon setup:

    ALTER DATABASE [SSISDB] SET RECOVERY SIMPLE;

    The transaction log can get a bit unruly, and point-in-time recovery has limited usefulness for this database.

    --

    Winter is coming

  • Daario (3/29/2016)


    This is another thing you may want to do upon setup:

    ALTER DATABASE [SSISDB] SET RECOVERY SIMPLE;

    The transaction log can get a bit unruly, and point-in-time recovery has limited usefulness for this database.

    --

    Winter is coming

    Good point



    --Mark Tassin
    MCITP - SQL Server DBA
    Proud member of the Anti-RBAR alliance.
    For help with Performance click this link
    For tips on how to post your problems

  • We were experiencing this issue this past weekend. We have a large volume of SSIS jobs that run throughout the day (24x7). During the nightly 1:00 AM run of [SSIS Server Maintenance Job], the transaction log was filling our disk drive and impacting other, more critical processes. The issue is the TRY/CATCH containing the delete loop: it runs as one huge transaction, and that was causing the problem. Our retention window is set to 1 day and we were still having issues.

    I addressed this in a similar manner by making the following changes:

    - Cloned [internal].[cleanup_server_retention_window] to my own version of the stored procedure.

    - Added a variable for the max number of the rows to delete. I set this to 5000 so we delete 5000 rows and get out with reduced transaction log growth.

    - Added a variable to accumulate the number of rows deleted during the loop iterations.

    - When the max number of rows to delete is exceeded, I BREAK out of the WHILE loop. There are actually two WHILE loops in the IF-THEN-ELSE where I check if the max number of rows is exceeded.

    - Lowered the @delete_batch_size to 100 from 1000 since we're dealing with cascading deletes.

    - Added a PRINT statement at the end of the TRY/CATCH so it's present in the job history log.

    - The job was originally running once a night at 1:00am. I changed the schedule to run every minute between 1:00am-2:00am. This results in the transaction log being hit in smaller chunks with breaks in between job runs.

    - Increased the size of our disk drive from 50GB to 300GB for more breathing room.

    - Set the size of the SSIS data and log files to 40GB each to help address any file growth delays that were occurring.

    - The database is already in SIMPLE recovery mode, so no change needed there.
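
    The capped, batched delete described above could be sketched roughly like this. This is illustrative only: the real [internal].[cleanup_server_retention_window] is considerably more involved, and the variable names @max_rows_to_delete and @rows_deleted are invented for this example.

    ```sql
    -- Illustrative sketch of a capped, batched cleanup loop.
    DECLARE @delete_batch_size  INT = 100;   -- small batches: cascading deletes amplify each row
    DECLARE @max_rows_to_delete INT = 5000;  -- cap per run to limit transaction log growth
    DECLARE @rows_deleted       INT = 0;
    DECLARE @rows_affected      INT = 1;

    WHILE @rows_affected > 0
    BEGIN
        DELETE TOP (@delete_batch_size)
        FROM internal.operations
        WHERE created_time < DATEADD(DAY, -1, SYSDATETIMEOFFSET());  -- 1-day retention

        SET @rows_affected = @@ROWCOUNT;
        SET @rows_deleted += @rows_affected;

        IF @rows_deleted >= @max_rows_to_delete
            BREAK;  -- stop here; the next scheduled run picks up where we left off
    END;

    PRINT CONCAT('Deleted ', @rows_deleted, ' rows this run.');  -- visible in job history
    ```

    Each batch commits on its own, so the log only has to hold one small batch of cascading deletes at a time instead of the whole retention sweep.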

    -Dale Jones

  • I would also change the default logging level. The default logs too much information.

    Luckily SQL 2016 has custom logging levels + the ability to set them as the default! 😀
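
    For reference, the server-wide default logging level is itself a catalog property; a sketch of turning it down from the default:

    ```sql
    -- Standard logging levels: 0 = None, 1 = Basic (default), 2 = Performance, 3 = Verbose.
    EXEC SSISDB.catalog.configure_catalog
         @property_name  = N'SERVER_LOGGING_LEVEL',
         @property_value = 0;  -- log nothing by default; override per execution when needed
    ```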

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • Now that two years have passed and we have SQL 2016 and beyond, have they improved the cascading delete problem?

  • BobMcC - Tuesday, October 9, 2018 1:02 PM

    Now that two years have passed and we have SQL 2016 and beyond, have they improved the cascading delete problem?

    That is a really good question.  I haven't had a chance to test it out.  I sure hope so.

    Ben
