April 25, 2011 at 4:29 pm
Hi Gurus,
I have a database with 3800 tables, and because of that huge number we decided not to use referential integrity between the tables. The functional experts and DBAs used to send us the required table information and the keys to join the tables. Our application has 6 modules. As the data grows, maintaining the database has become a cumbersome job and the queries are running slower. We have created indexes (clustered and non-clustered) but are still facing issues. Could anyone suggest the best approach, so I can propose a solution to my management to overcome this problem?
Thanks,
Vidu.
April 25, 2011 at 4:58 pm
vidubhushan 49100 (4/25/2011)
:blink: Redesign from scratch?
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
April 25, 2011 at 5:25 pm
So the short answer is to hire someone to help: a short-term consultant to help you debug and determine the issue. I can recommend a few who do remote work if you like.
When you say maintenance is not being done, what does that mean? Are you not updating statistics or rebuilding indexes? Either of those can cause queries to slow down as the data grows in size.
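As a rough sketch of what that routine maintenance looks like (the table and index names here are placeholders, not from the original post):

```sql
-- Placeholder names: dbo.Orders and IX_Orders_CustomerID are examples only.

-- Refresh the optimizer's statistics for one table (FULLSCAN is the thorough option)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Rebuild a heavily fragmented index...
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- ...or reorganize when fragmentation is moderate
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;

-- Check fragmentation to decide which of the two to use
SELECT i.name, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id;
```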
Referential integrity often isn't a speed issue, but an integrity issue. It helps you verify that data in child tables has valid matching rows in the parents, and that you don't have duplicates. Avoiding it because of a large number of tables is ill-advised; you are either misunderstanding the implications or unaware of how much work is actually involved.
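For reference, declaring referential integrity is one constraint per relationship. A hypothetical parent/child pair (these table names are illustrations, not from the poster's schema) might look like:

```sql
-- Hypothetical tables for illustration only
CREATE TABLE dbo.Customer (
    CustomerID int NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.CustomerOrder (
    OrderID    int NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL,
    -- The foreign key guarantees every order row points at a real customer
    CONSTRAINT FK_CustomerOrder_Customer
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customer (CustomerID)
);
```

A side benefit: when a foreign key is trusted, the optimizer can sometimes eliminate redundant joins entirely, which is part of the performance argument for constraints.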
April 26, 2011 at 6:00 am
Referential integrity is also a speed issue. It speeds things up. Here's a blog post outlining why.
You need a fundamental assessment of the system. It sounds like you don't have an indexing strategy beyond applying indexes to the pain points like band-aids. Fundamental database design requires choosing the clustered index as part of the initial table builds. I'd recommend getting a consultant in to help out.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 26, 2011 at 6:13 am
Grant Fritchey (4/26/2011)
Referential integrity is also a speed issue. It speeds things up. Here's a blog post outlining why. You need a fundamental assessment of the system. It sounds like you don't have an indexing strategy beyond applying indexes to the pain points like band-aids. Fundamental database design requires choosing the clustered index as part of the initial table builds. I'd recommend getting a consultant in to help out.
Now the real question becomes: how do you figure out if your system is read- or write-heavy?
April 26, 2011 at 6:17 am
Ninja's_RGR'us (4/26/2011)
Now the real question becomes: how do you figure out if your system is read- or write-heavy?
Monitoring. Which I'd wager cash isn't in place in this situation.
But you could just look at sys.dm_db_index_usage_stats and simply compare the numbers between scans, seeks, lookups, and updates. If the first three outnumber the fourth, you have your answer.
April 26, 2011 at 6:22 am
Grant Fritchey (4/26/2011)
Monitoring. Which I'd wager cash isn't in place in this situation.
But you could just look at sys.dm_db_index_usage_stats and simply compare the numbers between scans, seeks, lookups, and updates. If the first three outnumber the fourth, you have your answer.
Here's where I "think" I have a problem with this DMV. I have an ERP system which is extremely RBAR (row-by-agonizing-row). But I also have a reporting system that reads from the prod DB. How can I weight it so that one run of one report counts for more than one ERP operation? Most reports read thousands if not millions of rows, while the ERP only reads one at a time.
April 26, 2011 at 6:26 am
Ninja's_RGR'us (4/26/2011)
Here's where I "think" I have a problem with this DMV. I have an ERP system which is extremely RBAR (row-by-agonizing-row). But I also have a reporting system that reads from the prod DB. How can I weight it so that one run of one report counts for more than one ERP operation? Most reports read thousands if not millions of rows, while the ERP only reads one at a time.
Oh, don't get me wrong. I'm not saying that's the only possible way of interpreting the data, but it's a way to get into the ballpark quickly and achieve a fast understanding. Yeah, to really know, you have to do more and collect more data.
April 26, 2011 at 6:30 am
Grant Fritchey (4/26/2011)
Oh, don't get me wrong. I'm not saying that's the only possible way of interpreting the data, but it's a way to get into the ballpark quickly and achieve a fast understanding. Yeah, to really know, you have to do more and collect more data.
So Plan B would be? Collect a trace and sum up the Reads / Writes columns?
April 26, 2011 at 6:51 am
Ninja's_RGR'us (4/26/2011)
So Plan B would be? Collect a trace and sum up the Reads / Writes columns?
I think I'd call it step 2 of Plan A, but yeah.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 26, 2011 at 7:00 am
Grant Fritchey (4/26/2011)
Ninja's_RGR'us (4/26/2011)
So Plan B would be? Collect a trace and sum up the Reads / Writes columns?
I think I'd call it step 2 of Plan A, but yeah.
Step 1 is enough for me... around 7.3 times more reads than writes... excluding the extra weight of the reports.
SELECT
    DB_NAME(database_id) AS DB_NAME
    -- , SUM(user_seeks) + SUM(system_seeks) AS Seeks
    -- , SUM(user_scans) + SUM(system_scans) AS Scans
    -- , SUM(user_lookups) + SUM(system_lookups) AS LookUps
    , SUM(user_seeks) + SUM(system_seeks) + SUM(user_scans) + SUM(system_scans) + SUM(user_lookups) + SUM(system_lookups) AS Reads
    , SUM(user_updates) + SUM(system_updates) AS Updates
    , SUM(user_seeks) + SUM(system_seeks) + SUM(user_scans) + SUM(system_scans) + SUM(user_lookups) + SUM(system_lookups) - SUM(user_updates) - SUM(system_updates) AS [Reads - Updates]
FROM sys.dm_db_index_usage_stats
--WHERE database_id = DB_ID()
GROUP BY database_id;
/*
DB_NAME        Reads     Updates   Reads - Updates
master         33099        3582             29517
msdb         1072119      310263            761856
PROD       138038771    18802270         119236501
*/
April 27, 2011 at 10:59 am
Thanks all for your valuable suggestions. It seems I need to come up with more details on the database and its statistics, as you suggested. I will come back with all the DB statistics and ask for your suggestions again.
Thanks again for your time.
Regards,
Vidu.
April 27, 2011 at 11:12 am
Now that I think of it, I actually like the trace idea better. It seems more accurate on my system.
Start default trace and import into table...
SELECT SUM(Reads) AS Reads, SUM(Writes) AS Writes, SUM(Reads) / (SUM(Writes) * 1.0) AS Ratio
FROM dbo.RPT_Read_Write
Shells out a whopping :
Reads     Writes   Ratio
8264067   38050    217.189671484888304862
It's official, I'm optimizing for reads :w00t:.
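The "import into table" step above can be done with fn_trace_gettable. A sketch, assuming a trace file path (the path here is a placeholder; dbo.RPT_Read_Write is the table name from the query above):

```sql
-- Sketch: load a trace file into the summary table queried above.
-- N'X:\Traces\logout_trace.trc' is a placeholder path.
SELECT Reads, Writes
INTO dbo.RPT_Read_Write
FROM sys.fn_trace_gettable(N'X:\Traces\logout_trace.trc', DEFAULT)
WHERE Reads IS NOT NULL OR Writes IS NOT NULL;  -- skip events with no I/O figures
```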
Hey Grant, I'm thinking about running that trace for a full day, but tracing only Audit Logout events. Do you see any "lighter" trace to get those figures, or any pitfalls with that scheme?
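For what it's worth, a minimal server-side trace capturing only Audit Logout events might be sketched like this. The file path is a placeholder, and the event/column IDs should be double-checked against the sp_trace_setevent documentation before use:

```sql
-- Sketch of a lightweight server-side trace: Audit Logout only.
-- Event 15 = Audit Logout; columns 12 = SPID, 16 = Reads, 17 = Writes.
DECLARE @TraceID     int;
DECLARE @maxfilesize bigint = 512;   -- MB per rollover file
DECLARE @on          bit    = 1;     -- sp_trace_setevent requires a bit parameter

-- Option 2 = TRACE_FILE_ROLLOVER; path is a placeholder
EXEC sp_trace_create @TraceID OUTPUT, 2, N'X:\Traces\logout_trace', @maxfilesize;

EXEC sp_trace_setevent @TraceID, 15, 12, @on;  -- SPID
EXEC sp_trace_setevent @TraceID, 15, 16, @on;  -- Reads
EXEC sp_trace_setevent @TraceID, 15, 17, @on;  -- Writes

EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace
```

Stopping and closing it later would be sp_trace_setstatus with status 0 and then 2.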
April 27, 2011 at 11:29 am
I had a system where we ran a basic query trace 24/7 for years. Other than dealing with the data that came out of it, there were no real issues at all. We never had a problem caused by the trace. It gave me great confidence that a well-defined server-side trace is a very safe operation. Go for it.
April 27, 2011 at 11:41 am
Grant Fritchey (4/27/2011)
I had a system where we ran a basic query trace 24/7 for years. Other than dealing with the data that came out of it, there were no real issues at all. We never had a problem caused by the trace. It gave me great confidence that a well-defined server-side trace is a very safe operation. Go for it.
Nice to know. I was more referring to my approach for getting the reads/writes info without having a billion rows of trace data. I didn't see what could go wrong, so I asked :w00t:.
7 days trace in progress....