July 31, 2019 at 3:55 pm
Executive summary: What SQL statistics would you be gathering now to demonstrate the overall improvement in the system? We have some monitoring in place, but I want your opinions/ideas without any of my influence.
Details: We have a front-end DB (peaks at ~20k TPS during the 8-5 work day) used by clients in a few SaaS apps (plus in-house apps, reporting, etc.). It holds 'widget' data going back 15+ years, and we have approval to clear anything older than 26 months, which amounts to 6.4 million records across 30-ish tables. That takes the ~210GB database down to ~80GB.
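One way to put hard numbers behind that before/after size claim on a per-table basis is to snapshot sizes from the catalog views once before and once after the cleanse, then diff. This is a generic sketch, not the poster's script; column arithmetic assumes the standard 8KB page size:

```sql
-- Sketch: per-table row count and reserved space, from the catalog views.
-- Run before and after the cleanse and compare the two result sets.
SELECT
    s.name + '.' + t.name AS table_name,
    -- Count rows only from the heap/clustered index to avoid double-counting
    SUM(CASE WHEN p.index_id IN (0, 1) THEN p.rows ELSE 0 END) AS row_count,
    SUM(a.total_pages) * 8 / 1024 AS reserved_mb   -- 8KB pages -> MB
FROM sys.tables            AS t
JOIN sys.schemas           AS s ON s.schema_id   = t.schema_id
JOIN sys.partitions        AS p ON p.object_id   = t.object_id
JOIN sys.allocation_units  AS a ON a.container_id = p.partition_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;
```

Persisting each run into a dated table makes the per-table shrink from the nightly cleanse easy to chart over time.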
P.S. Please, no critique of our approach - even if it's 'wrong' or not the best, we've tested it six ways to Sunday and we are not altering it now. The cleanse is scripted and refined; it runs every night and pushes the cleansed DB to Dev, Test, and a reporting server.
August 2, 2019 at 3:50 pm
I mainly monitor waits and latencies, using code developed from the following posts to provide baselines:
http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
http://www.sqlskills.com/blogs/paul/how-to-examine-io-subsystem-latencies-from-within-sql-server/
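To make those two links concrete, a minimal sketch of the baselining approach they describe: snapshot the wait-stats and file-stats DMVs on a schedule, then compare deltas between the pre- and post-cleanse periods (the cumulative DMV values alone are meaningless without a time window). The snapshot table names here are illustrative, not from the articles:

```sql
-- Sketch: periodic snapshots for wait and I/O latency baselines.
-- dbo.WaitStatsSnapshot / dbo.FileLatencySnapshot are illustrative names;
-- create them with matching columns plus a capture_time datetime2.
INSERT INTO dbo.WaitStatsSnapshot
    (capture_time, wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT SYSDATETIME(), wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;

INSERT INTO dbo.FileLatencySnapshot
    (capture_time, database_id, file_id,
     num_of_reads, io_stall_read_ms, num_of_writes, io_stall_write_ms)
SELECT SYSDATETIME(), database_id, file_id,
       num_of_reads, io_stall_read_ms, num_of_writes, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

-- Average read latency (ms) per file between two snapshots is then:
--   (later.io_stall_read_ms - earlier.io_stall_read_ms)
--     / NULLIF(later.num_of_reads - earlier.num_of_reads, 0)
```

Both DMVs are cumulative since the last service restart (or manual clear), so always diff two snapshots rather than reading raw values.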
If you have the money, a professional monitoring tool will provide more information, e.g.:
https://www.red-gate.com/products/dba/sql-monitor/
https://www.sentryone.com/products/sentryone-platform/sql-sentry/sql-server-performance-monitoring