SQLServerCentral

Two Things I Would Change in SSISDB Setup


https://www.sqlservercentral.com/Forums/Topic1772994.aspx

By bkubicek - Monday, March 28, 2016 2:57 PM

Comments posted to this topic are about the item Two Things I Would Change in SSISDB Setup
By nick.mcdermaid - Monday, March 28, 2016 8:59 PM

Great information - this kind of practical advice is much more valuable than your standard regurgitated walkthroughs. A bit off topic but long overdue for enhancement is the SQL Agent. Something as simple as folders would make management of this so much easier.
By mtassin - Tuesday, March 29, 2016 12:05 AM

I completely agree... when I look at some of my jobs lists... I get lost trying to identify what I'm looking for.

Let's not even get into SSRS Subscription schedules.... Nothing like 500 jobs with nothing but a GUID for a name.
By Iwas Bornready - Tuesday, March 29, 2016 12:06 AM

Thanks for the article.
By ianrogerguy - Tuesday, March 29, 2016 12:23 AM

Hi,

The default 365 days is way too many once you have a busy server; I remember when I fell foul of that one. :(

To alter the retention period I use this method:

EXEC catalog.configure_catalog RETENTION_WINDOW, 7;
GO

There is also this one for changing the max versions of a project:

EXEC catalog.configure_catalog MAX_PROJECT_VERSIONS, 4;
GO
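
It can also help to check the current values before changing them; a quick sanity-check query against the catalog properties view (assuming you're connected to the instance hosting SSISDB):

```sql
-- Inspect the current catalog settings before changing them
SELECT property_name, property_value
FROM [SSISDB].[catalog].[catalog_properties]
WHERE property_name IN (N'RETENTION_WINDOW', N'MAX_PROJECT_VERSIONS');
```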

Ian
By Orlando Colamatteo - Tuesday, March 29, 2016 12:32 AM

Practical, useful information from the front lines. These settings are analogous to setting Max Memory on a new instance. They certainly caught my team and me off guard when we first implemented SSISDB. Thank you for sharing.
By stan.geiger - Tuesday, March 29, 2016 1:11 AM

This topic is one of the reasons that, in a presentation I often do on the SSIS Catalog, I always begin with how to install the SSISDB and how to reset the retention properties. I make everybody in the room repeat it for emphasis.
By mtassin - Tuesday, March 29, 2016 1:11 AM

And thanks to Ben for the tip about permissions.

In my environments we just blanket-grant sa privileges for running maintenance scripts like this and syspolicy_purge_history, so it slipped by my attention.

I also quit worrying about setting the retention window once I moved to my script, because I could just pass it in; that was simply easier.
By bkubicek - Tuesday, March 29, 2016 1:16 AM

Mark thanks again for writing your article and script. It has been very helpful to me.

Ben
By Daario - Tuesday, March 29, 2016 1:28 AM

This is another thing you may want to do upon setup:

ALTER DATABASE [SSISDB] SET RECOVERY SIMPLE;



The transaction log can get a bit unruly, and Point-in-time recovery has limited usefulness in this database.
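
If you do change this, it's worth confirming the setting took effect; a quick check against sys.databases:

```sql
-- Confirm SSISDB is now in SIMPLE recovery
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'SSISDB';
```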

--
Winter is coming
By mtassin - Tuesday, March 29, 2016 1:34 AM

Daario (3/29/2016)
This is another thing you may want to do upon setup:

ALTER DATABASE [SSISDB] SET RECOVERY SIMPLE;



The transaction log can get a bit unruly, and Point-in-time recovery has limited usefulness in this database.

--
Winter is coming


Good point
By dalejones - Wednesday, March 30, 2016 7:44 AM

We were experiencing this issue this past weekend. We have a large volume of SSIS jobs that run throughout the day (24x7). During the nightly 1:00am run of [SSIS Server Maintenance Job], the transaction log was filling up our disk drive and impacting other, more critical processes. The issue is the TRY/CATCH containing the delete loop: that's one huge transaction, and it was causing the problem. Our retention window is set to 1 day and we were still having issues.

I addressed this in a similar manner by making the following changes:
- Cloned [internal].[cleanup_server_retention_window] to my own version of the stored procedure.
- Added a variable for the max number of the rows to delete. I set this to 5000 so we delete 5000 rows and get out with reduced transaction log growth.
- Added a variable to accumulate the number of rows deleted during the loop iterations.
- When the max number of rows to delete is exceeded, I BREAK out of the WHILE loop. There are actually two WHILE loops in the IF-THEN-ELSE where I check if the max number of rows is exceeded.
- Lowered the @delete_batch_size to 100 from 1000 since we're dealing with cascading deletes.
- Added a PRINT statement at the end of the TRY/CATCH so it's present in the job history log.
- The job was originally running once a night at 1:00am. I changed the schedule to run every minute between 1:00am-2:00am. This results in the transaction log being hit in smaller chunks with breaks in between job runs.
- Increased the size of our disk drive from 50GB to 300GB for more breathing room.
- Set the size of the SSIS data and log files to 40GB each to help address any file growth delays that were occurring.
- The database is already in SIMPLE recovery mode, so no change needed there.
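
A rough sketch of the capped delete loop described above. The table and column names here are illustrative only; the real [internal].[cleanup_server_retention_window] procedure walks several internal tables with cascading deletes, so adapt accordingly:

```sql
DECLARE @delete_batch_size  INT = 100;   -- lowered from 1000 due to cascading deletes
DECLARE @max_rows_to_delete INT = 5000;  -- bail out after this many rows per run
DECLARE @total_deleted      INT = 0;
DECLARE @rows_deleted       INT = 1;

WHILE @rows_deleted > 0
BEGIN
    -- Delete one small batch of operations older than the 1-day retention window
    DELETE TOP (@delete_batch_size)
    FROM [internal].[operations]
    WHERE [end_time] < DATEADD(DAY, -1, SYSDATETIMEOFFSET());

    SET @rows_deleted = @@ROWCOUNT;
    SET @total_deleted = @total_deleted + @rows_deleted;

    -- Cap the work per run so the transaction log grows in small chunks
    IF @total_deleted >= @max_rows_to_delete
    BEGIN
        PRINT CONCAT('Row cap reached: ', @total_deleted, ' rows deleted this run.');
        BREAK;
    END
END
```

With the job rescheduled to run every minute between 1:00am and 2:00am, each run does at most @max_rows_to_delete rows of work and the log gets a chance to truncate between runs.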
By Koen Verbeeck - Sunday, April 3, 2016 11:49 PM

I would also change the default logging level. Default logs too much information.
Luckily SQL 2016 has custom logging levels + the ability to set them as the default! :-D
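
In 2016 and later the server-wide default can be changed through the same configure_catalog procedure; a sketch (property values: 0 = None, 1 = Basic, 2 = Performance, 3 = Verbose):

```sql
-- SQL Server 2016+: change the server-wide default logging level
EXEC [SSISDB].[catalog].[configure_catalog]
    @property_name  = N'SERVER_LOGGING_LEVEL',
    @property_value = 2;
```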
By BobMcC - Tuesday, October 9, 2018 6:02 AM

Now that two years have passed and we have SQL 2016 and beyond, have they improved the cascading delete problem?
By bkubicek - Tuesday, October 9, 2018 6:05 AM

BobMcC - Tuesday, October 9, 2018 1:02 PM
Now that two years have passed and we have SQL 2016 and beyond, have they improved the cascading delete problem?

That is a really good question. I haven't had a chance to test it out. I sure hope so.

Ben
By mtassin - Thursday, January 10, 2019 11:57 PM

No they haven't... at least not as of SQL 2016
By gbritton1 - Friday, January 11, 2019 5:55 AM

bkubicek - Monday, March 28, 2016 9:57 PM
Comments posted to this topic are about the item Two Things I Would Change in SSISDB Setup

At least with SQL Server 2016 you can now put SSISDB in an AG. SQL Server 2016 HA Series Part 2 – SSIS and Availability Groups. That's important for us since with 2012 we had to deploy changes individually to all replicas. OTOH if you do that, you have to leave RECOVERY = FULL.