January 2, 2015 at 9:52 am
On our production server we have 13 databases with a combined size of 1.5 TB, not including log files. None of them is strictly OLTP or strictly data warehouse; they are all hybrids of the two to varying degrees.
11 databases are replicated to another server. Usually latency is not bad, between 5-10 seconds, but at times of heavy load it can reach a minute or more. The distribution database is located on the Publisher server, and I am trying to find ways to improve its performance. One of the ideas is to make it In-Memory OLTP.
I have not tested it yet. But does it make sense? Will it reduce latency? Are there any technical limitations on it?
Thanks
January 3, 2015 at 5:06 am
It's not something I've tried, but I suspect it won't work. Go through all the limitations currently in force on tables that can go into memory, and I'm pretty sure you're going to see many restrictions that will prevent doing this with the distribution database.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 4, 2015 at 9:59 am
SQL Guy 1 (1/2/2015)
11 databases are replicated to another server. Usually latency is not bad, between 5-10 seconds, but at times of heavy load it can reach a minute or more. The distribution database is located on the Publisher server, and I am trying to find ways to improve its performance. One of the ideas is to make it In-Memory OLTP.
Rather than knee-jerk grabbing for a random solution, maybe start by investigating *why* there is latency. I somehow doubt you're getting latch-based insert contention (which is one of the main things Hekaton was designed to help with).
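One way to see where the latency actually occurs is with tracer tokens, which time each leg of the replication path. A minimal sketch using the documented procedures (the publication name here is hypothetical):

```sql
-- Run at the Publisher, in the published database.
-- Posts a tracer token into the transaction log; its progress shows where
-- the time goes: Publisher -> Distributor vs Distributor -> Subscriber.
DECLARE @token_id int;
EXEC sys.sp_posttracertoken
    @publication = N'MyPublication',   -- hypothetical publication name
    @tracer_token_id = @token_id OUTPUT;

-- After a few minutes, check the latency breakdown for that token:
EXEC sys.sp_helptracertokenhistory
    @publication = N'MyPublication',
    @tracer_id = @token_id;
```

If most of the time is spent between Distributor and Subscriber, tuning the distribution database itself may not be where the win is.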
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 5, 2015 at 4:07 pm
Some of the issues I've seen with Dist latency revolve around some of these issues:
1. The DB isn't indexed properly, so consider adding indexes to a couple of the tables. Some quick profiling and execution plans will tell you what you need.
2. Index maintenance not being done on the DB. Distribution tables fragment like any others. Make sure it's part of your routine.
3. Keeping too many days of history. We don't want to have to reinitialize the subscriber because it times out and invalidates before we can get it fixed, so we keep 3-5 days in the tables. When you've got a lot of heavy DBs, that can start to weigh on the system. Couple that with the indexes and you can have performance problems.
4. Large amounts of updates/deletes. Often these are ad hoc operations, and every single row being updated or deleted gets pushed into the distribution DB. Consider making it a stored procedure call instead and set the publication to send the SP call through the distribution DB instead of all the rows. That way the SP will be run on the subscriber instead of clogging up distribution with all those separate commands.
5. Split data and log files onto separate drives.
Remember, Dist is a DB like any other and general perf guidelines apply to it the same as any other.
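For points 2 and 3 above, a rough sketch of what that might look like, using the documented procedures and DMVs (the retention value and fragmentation threshold are illustrative, not recommendations):

```sql
-- Run at the Distributor. Shorten how long delivered transactions are
-- retained (max_distretention defaults to 72 hours); smaller retention
-- keeps MSrepl_commands / MSrepl_transactions smaller.
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'max_distretention',
    @value = 48;   -- hours; pick a value that fits your recovery window

-- Check fragmentation on the distribution tables so the worst ones can
-- be added to the regular index-maintenance routine:
SELECT OBJECT_NAME(ips.object_id, DB_ID(N'distribution')) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(N'distribution'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN distribution.sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;
```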
Watch my free SQL Server Tutorials at:
http://MidnightDBA.com
Blog Author of:
DBA Rant – http://www.MidnightDBA.com/DBARant
January 6, 2015 at 3:17 pm
Most of our databases are properly indexed; there are of course some exceptions, but generally they don't affect replication latency. Data and log files are split across different physical disks, and we keep history in distribution for 72 hours, which I think is the default. Thanks for the interesting advice; I never thought about including the distribution database in the index maintenance process, I will try it.
And yes, we have large amounts of updates and inserts, no deletes. They are part of large ETL processes, and they are exactly the main reason for replication latency. They are a twisted combination of SPs, BCP, and SSIS. And some data also comes from linked servers and even from applications.
And if I understand you correctly, you are recommending executing them on the subscriber independently of the publisher?
Hmmmm... That's not an easy task. First, I would have to exclude some tables from the list of published articles. Currently we have articles on all the tables that have a PK; we don't exclude anything. And there are some tables that can be updated both from ETL and from ad hoc queries. I'm afraid I could create more mess than solution. Maybe I'd be better off living with the occasionally increased latency and not changing anything?
This is the reason why I asked about placing the distribution database in memory in my initial post.
January 6, 2015 at 3:52 pm
You've managed to misunderstand me. I was talking about indexes in Dist, not your user DBs.
January 7, 2015 at 1:34 am
SQL Guy 1 (1/6/2015)
This is the reason why I asked a question about placing dist into memory at my initial post.
The thing is, In-Memory OLTP solves particular problems: high-speed inserts (as in thousands a second) and large-volume, high-complexity queries (via natively compiled procedures). It's not a general performance silver bullet and can in some cases make performance worse.
Before you decide on a solution, identify the problem. The distribution DB is slow. Why? What specifically is slow, and what is making that particular process slow? Once you have answers to those, you're in a good position to choose a solution which solves the problem.
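As a starting point for that investigation, the distribution database exposes a pending-command count and the Distribution Agent's own history. A hedged sketch (all server, database, and publication names here are placeholders):

```sql
-- Run in the distribution database at the Distributor.
-- How many commands are still waiting to be delivered to the subscriber,
-- and an estimate of how long delivery will take:
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher = N'MyPublisher',        -- hypothetical names
    @publisher_db = N'MyPublishedDB',
    @publication = N'MyPublication',
    @subscriber = N'MySubscriber',
    @subscriber_db = N'MySubscriberDB',
    @subscription_type = 0;             -- 0 = push subscription

-- Recent Distribution Agent history: delivery rate and latency per run
SELECT TOP (20) [time], delivery_rate, delivery_latency, comments
FROM dbo.MSdistribution_history
ORDER BY [time] DESC;
```

If the pending-command count spikes during the ETL windows, the bottleneck is more likely the volume of row-by-row commands than the distribution database's storage.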
January 7, 2015 at 7:54 am
Yep, that's why I gave him a list of things that typically slow down Dist.