October 24, 2014 at 3:01 pm
ramana3327 (10/23/2014)
If there are 2 SQL Instances (Replicas) running on the same node, can they being part of the same AG?
No. All replicas in an Always On availability group must reside on separate cluster nodes.
ramana3327 (10/23/2014)
Is there a particular quorum model to create the cluster for AG on VMware?Thanks in advance
Whether physical or virtual, you'll base your witness type on the number of cluster nodes: for an odd number of nodes, no witness is required; for an even number, use a file share witness.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
October 24, 2014 at 3:15 pm
ramana3327 (10/23/2014)
Is it possible to have AG created across multiple domains?
Are the domains in the same AD forest?
An Always On availability group is supported across cluster boundaries for migration purposes only.
ramana3327 (10/23/2014)
Is it possible to have AGs created across VMs on same physical host?
Cluster in a box is possible, but pointless for anything other than test purposes.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
October 24, 2014 at 4:42 pm
ramana3327 (10/23/2014)
Yes but now it is hitting the production database performance (OLTP). That is the reason they want the read only copy for the reporting source
It would be far easier and more cost-effective to analyze the query (or queries) behind the report and fix them so they run more quickly and with fewer resources. Three-orders-of-magnitude performance gains and resource-usage reductions are more common than you'd think if you just spend some time working the code and maybe some indexes.
Like Gail suggested, moving the data to a read-only copy is going to buy you very, very little, especially when compared to how much it will cost to do so.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 27, 2014 at 7:40 am
Hurray on the upgrade to SQL Server 2014!
On the other hand, on top of the suggestions of the other repliers:
Have you ever checked any variant of snapshot isolation to avoid blocking and deadlocking?
Snapshot Isolation in SQL Server
Implementing Snapshot or Read Committed Snapshot Isolation in SQL Server: A Guide
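For reference, a minimal sketch of enabling the two variants (the database name is a placeholder; test the impact on tempdb version-store usage first):

```sql
-- Statement-level row versioning: READ COMMITTED readers stop blocking writers
ALTER DATABASE MyOltpDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Transaction-level snapshot isolation (sessions must opt in explicitly)
ALTER DATABASE MyOltpDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A reporting session opting in to snapshot isolation:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

READ_COMMITTED_SNAPSHOT changes behavior for all existing queries without code changes, which is why it is usually the first variant tried for report blocking.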
I know, I just couldn't keep off
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
November 15, 2014 at 5:49 pm
ramana3327 (10/23/2014)
Thanks for the replies. Yes, they are ready with the budget. They want 3 replicas for some other purposes as well; one is specifically for reports.
Somehow, one simple report is now taking a minimum of 12 hours.
The row count in that report is 28,000 and the byte count is 790,000.
Data retrieval for that report takes 32,000,000 ms, processing takes 4,940 ms, and rendering takes 26,000 ms.
The report server is getting data from the OLTP server over a linked server.
We need to run that report at a particular time, i.e. once every 24 hours.
We are not optimizing that report, so we need another source.
I know this post is a good 3 weeks old but thought I'd echo Gail's good warning. You folks are getting ready to spend a whole lot of money and it's not going to help much. If the report is taking 12 hours on your OLTP box, there's a very high probability that putting it all on another machine won't help at all.
It must be nice to have such money to burn.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 18, 2014 at 10:48 am
Hi Jeff,
The report is now taking more than 12 hours.
The report executes one stored procedure. When first tested 2 months ago, the SP took around 10 minutes to execute from SSMS. Nowadays it takes more than an hour.
The SP has some WHILE loops. I'm not sure how to avoid the WHILE loops without affecting the business logic, because when we add any new id it has to recalculate everything for that new id.
i.e., when we add any new id it takes another 15-20 minutes.
The amount of data doesn't increase much, but the execution time keeps growing even if you add just one new id.
That is causing an issue. Also, the SP uses a linked server, so obviously there will be some delay. All these factors together are causing the performance issues.
By using a read-only replica in Always On, at least we can reduce the network latency.
November 18, 2014 at 11:23 am
Linked servers can do horrible, horrible things to performance beyond simple network latency (statistics issues, forced RBAR). But sometimes you're stuck with them, so you have to deal...
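One common mitigation, sketched below with invented table and server names, is to pull the remote rows into a local temp table in a single set-based pass, or to push the filtering to the remote server with OPENQUERY, rather than letting the optimizer fetch remote rows one at a time:

```sql
-- Stage the remote data locally in one pass, then join/aggregate against #staging
SELECT col1, col2
INTO #staging
FROM LinkedSrv.RemoteDb.dbo.BigTable
WHERE snapshot_date = '20141118';

-- Or force the WHERE clause to execute on the remote server
SELECT *
FROM OPENQUERY(LinkedSrv,
    'SELECT col1, col2 FROM RemoteDb.dbo.BigTable WHERE snapshot_date = ''20141118''');
```

Either way, the expensive joins then run entirely on one side of the wire instead of dragging rows across it repeatedly.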
Anyway, the danger I think you are facing is that the idea that a new offloaded replica will make the report go faster is an assumption. The key is to identify why the query is slow, i.e., what it is waiting on. What is the contention for resources? Are you monitoring blocking waits? If you're seeing huge amounts of blocking during report execution, then yes, you might be contending with the OLTP traffic. But if it's bad query design, or any number of other performance issues, running the same thing in a different place isn't going to guarantee an improvement. In fact, many shops building out a replica of production tend to underprovision it with RAM/CPU/fast storage compared to production, so you could end up in a situation where it runs worse!
You should be able to analyze your wait stats and the query plan and determine what it is that is slowing this report query down...where the bottleneck is. If you can't pinpoint that...well, setting up and maintaining complex AGs and clustering is more complex than that, I would say. Jumping straight to always on, doing a big version upgrade, and signing up for extra Enterprise licenses to solve a slow report seems like cart before the horse. Those could end up being valid solutions but before shelling out that kind of expense and incurring the risk, you should be able to demonstrate that it will fix the problem. At least build out a test environment first...even without setting up AG/clustering, restoring it to an instance as a static copy and then running your report against it should at least tell you if contention with the OLTP is causing the slowdown. I just don't see that being very likely unless there is massive blocking constantly on the server, or your CPU is maxed out, or other much more severe performance issues.
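As a starting point, a rough sketch of that wait-stats analysis (the exclusion list below is abbreviated; extend it with the benign wait types for your build):

```sql
-- Top waits accumulated since the last restart (or since the stats were cleared)
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
                        'XE_TIMER_EVENT', 'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;

-- What currently running requests are waiting on right now
SELECT session_id, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50;
```

Run the second query while the report is executing: a populated blocking_session_id points at OLTP contention, while CXPACKET/PAGEIOLATCH-style waits with no blocker point back at the query itself.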
YMMV!
November 18, 2014 at 12:29 pm
ramana3327 (11/18/2014)
Hi Jeff, the report is now taking more than 12 hours.
The report executes one stored procedure. When first tested 2 months ago, the SP took around 10 minutes to execute from SSMS. Nowadays it takes more than an hour.
The SP has some WHILE loops. I'm not sure how to avoid the WHILE loops without affecting the business logic, because when we add any new id it has to recalculate everything for that new id.
i.e., when we add any new id it takes another 15-20 minutes.
The amount of data doesn't increase much, but the execution time keeps growing even if you add just one new id.
That is causing an issue. Also, the SP uses a linked server, so obviously there will be some delay. All these factors together are causing the performance issues.
By using a read-only replica in Always On, at least we can reduce the network latency.
You need to work on that code. If you can't avoid the WHILE loop, hire a consultant who knows how to do it. I recently changed code that ran for over 30 hours without completing into something that runs in less than 5 minutes, just by removing the WHILE loops.
Adding more power won't help you if you don't fix the root cause.
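As a hypothetical illustration of the kind of rewrite being described here (table and column names are invented), compare a per-id total computed row by row in a WHILE loop with its set-based equivalent:

```sql
-- RBAR version: one round trip through the data per id
DECLARE @id int = (SELECT MIN(id) FROM dbo.Detail);
WHILE @id IS NOT NULL
BEGIN
    UPDATE s
    SET total = (SELECT SUM(amount) FROM dbo.Detail WHERE id = @id)
    FROM dbo.Summary AS s
    WHERE s.id = @id;

    SET @id = (SELECT MIN(id) FROM dbo.Detail WHERE id > @id);
END;

-- Set-based equivalent: one pass over the data for all ids at once
UPDATE s
SET total = d.total
FROM dbo.Summary AS s
JOIN (SELECT id, SUM(amount) AS total
      FROM dbo.Detail
      GROUP BY id) AS d
  ON d.id = s.id;
```

The loop re-scans dbo.Detail once per id, which is why adding one new id adds 15-20 minutes; the set-based form scans it once, so its cost barely changes.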
November 18, 2014 at 7:24 pm
ramana3327 (11/18/2014)
Hi Jeff, the report is now taking more than 12 hours.
The report executes one stored procedure. When first tested 2 months ago, the SP took around 10 minutes to execute from SSMS. Nowadays it takes more than an hour.
The SP has some WHILE loops. I'm not sure how to avoid the WHILE loops without affecting the business logic, because when we add any new id it has to recalculate everything for that new id.
i.e., when we add any new id it takes another 15-20 minutes.
The amount of data doesn't increase much, but the execution time keeps growing even if you add just one new id.
That is causing an issue. Also, the SP uses a linked server, so obviously there will be some delay. All these factors together are causing the performance issues.
By using a read-only replica in Always On, at least we can reduce the network latency.
Yowch. That's a whole lot of additional time just to add one new ID to the report.
If it were me, I'd start over and keep the "paradigm shift" in thinking that I have in my signature line below on your mind at all times. Apply a bit of "Divide'n'Conquer" knowledge (all-in-one queries or large queries can be a real killer) along with that and you can probably solve the problem. Yes, I realize that it might have some complicated business logic but, if you start thinking in columns instead of rows, you might just be able to suss it on your own. If not and if the report is so important that people are willing to spend all that money on new hardware, then I recommend you hire a T-SQL heavy hitter to solve the problem. That will still be less than the new hardware/associated licensing and the whole group might learn something in the process.
The most difficult part will be finding someone that lives up to the name "heavy hitter".
--Jeff Moden
Change is inevitable... Change for the better is not.
November 19, 2014 at 7:59 am
The most difficult part will be finding someone that lives up to the name "heavy hitter".
Actually that part is simple - I know a guy ... 😎
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 19, 2014 at 2:32 pm
The first thing that stood out to me was that you want to invest in new infrastructure rather than investigate the cause of the report slowness. I do like the idea of offloading the reporting source to its own warehouse; any slow or badly written queries there will not affect your OLTP environment. But that is not where I would start here.
You mention in a post that the report ran fine initially but degraded over time; I'm not sure if you meant during report execution or in SSMS. If during report execution, I immediately think parameter sniffing. Try this... Say your procedure looks like this:
CREATE PROCEDURE myProc (@param1 int, @param2 varchar(30))
AS
BEGIN
    ...
END
Change it to something like this:
CREATE PROCEDURE myProc (@param1 int, @param2 varchar(30))
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @procParam1 int = @param1;
    DECLARE @procParam2 varchar(30) = @param2;

    /* substitute the local variables for the parameters in the rest of your code */
END
This suggestion might help. The way it is described though, I think there is more going on.
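Another option, if recompile cost is acceptable for a once-a-day report, is to sidestep the cached plan entirely with OPTION (RECOMPILE); a sketch, with a hypothetical table and columns:

```sql
CREATE PROCEDURE myProc (@param1 int, @param2 varchar(30))
AS
BEGIN
    SET NOCOUNT ON;

    SELECT col1, col2
    FROM dbo.SomeTable                              -- hypothetical table
    WHERE keyCol = @param1 AND descCol = @param2
    OPTION (RECOMPILE);                             -- fresh plan for the actual values each run
END
```

The local-variable trick gives a plan built for "average" statistics; OPTION (RECOMPILE) gives a plan built for the exact values, at the cost of compiling on every execution.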
----------------------------------------------------
November 19, 2014 at 3:13 pm
That is not related to parameter sniffing.
We are passing 3 input parameters; those are default parameters, and we are not changing them in the SP or in the report.
Just tested the SP in SSMS.
For 39,000 records it took almost 14 hours.
Is that common?
November 19, 2014 at 3:50 pm
ramana3327 (11/19/2014)
That is not related to parameter sniffing. We are passing 3 input parameters; those are default parameters, and we are not changing them in the SP or in the report.
Just tested the SP in SSMS.
For 39,000 records it took almost 14 hours.
Is that common?
For 39,000 rows, I'd expect any query to complete in less than a minute. Even on my laptop.
November 19, 2014 at 4:38 pm
Luis Cazares (11/19/2014)
ramana3327 (11/19/2014)
That is not related to parameter sniffing. We are passing 3 input parameters; those are default parameters, and we are not changing them in the SP or in the report.
Just tested the SP in SSMS.
For 39,000 records it took almost 14 hours.
Is that common?
For 39,000 rows, I'd expect any query to complete in less than a minute. Even on my laptop.
I don't think this is about the number of rows generated. What is important is how much work was done to create those 39K rows. If you have to table scan a billion fat rows and join that to 5 other large tables then it might take a while. Likewise if you are ITERATIVELY scanning large (or sometimes even small tables) you can get screwed. I have seen things in the real world that you ABSOLUTELY CANNOT FIX with bigger hardware!!! Yet it is SOOOO easy to make queries run 4-6 orders of magnitude faster (usually with very simple tuning) at virtually every client I visit. 🙂
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 20, 2014 at 1:25 am
ramana3327 (11/19/2014)
That is not related to parameter sniffing. We are passing 3 input parameters; those are default parameters, and we are not changing them in the SP or in the report.
Just tested the SP in SSMS.
For 39,000 records it took almost 14 hours.
Is that common?
I've seen a query that returned < 100 rows take almost 4 days. After a rewrite of the query it took 15 minutes. Bad SQL code can cause major performance problems. The solution is to fix the code, not spend a fortune on hardware.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability