My client has a db that is 94% reads, 6% writes.
There are two tables with ~100 million rows each that are often joined. They are running SQL Server 2008 R2 Standard on Windows Server 2008 R2 Enterprise. The database is ~300GB and expected to grow. Statistics are updated nightly with FULLSCAN. The server is a Dell 720 with 128GB of memory, 64GB of which is allocated to SQL Server.
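For reference, the nightly maintenance amounts to something like this (dbo.BigTableA and dbo.BigTableB are placeholder names for the two tables):

    -- Nightly statistics maintenance, roughly; table names are placeholders.
    UPDATE STATISTICS dbo.BigTableA WITH FULLSCAN;
    UPDATE STATISTICS dbo.BigTableB WITH FULLSCAN;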
I have warned management for quite a long time that they can't just keep throwing rows into these tables and expect things to remain the same. Their response is that there are much larger databases out there holding far more rows, and that those shops don't have the kind of performance issues the client is beginning to experience. My reply has been that those shops are not likely running SQL Standard, and that they are probably using some form of partitioning (see the sketch below).
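By "some form of partitioning" I mean roughly the following, which requires Enterprise Edition and so is not an option on Standard (the partitioning column and boundary values are invented for the sketch):

    -- Sketch only: table partitioning as available on Enterprise Edition.
    -- The date column and year boundaries are made up for illustration.
    CREATE PARTITION FUNCTION pf_ByYear (datetime)
        AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01', '2013-01-01');

    CREATE PARTITION SCHEME ps_ByYear
        AS PARTITION pf_ByYear ALL TO ([PRIMARY]);

    -- The large tables would then be built on ps_ByYear, keyed on the date column.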
Queries that generate sub-optimal plans are becoming more frequent, causing index scans of one or both of these tables. These are single-statement queries generated by a framework, so they are not easily tuned individually. There are predicates that should filter these queries, but they do not appear to be pushed down.
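Since the SQL text itself can't be changed, the only per-statement lever I'm aware of without touching the framework is a plan guide. A minimal sketch, with placeholder statement text (the real @stmt would have to match the framework's batch exactly):

    -- Sketch: attach OPTION (RECOMPILE) to one framework-generated statement
    -- without modifying the framework. Statement, table, and column names
    -- below are placeholders.
    EXEC sp_create_plan_guide
        @name            = N'pg_recompile_bigjoin',
        @stmt            = N'SELECT a.Col1, b.Col2 FROM dbo.BigTableA AS a INNER JOIN dbo.BigTableB AS b ON b.ParentId = a.Id',
        @type            = N'SQL',
        @module_or_batch = NULL,
        @params          = NULL,
        @hints           = N'OPTION (RECOMPILE)';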
My fear has been that as rows accumulate in these tables, statistics become less and less meaningful: even with a nightly FULLSCAN, the histogram is capped at 200 steps, so at 100 million rows each step covers hundreds of thousands of rows, and the optimizer ends up choosing bad plans.
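A quick way I can sanity-check when those statistics were last rebuilt (table names again are placeholders):

    -- When were statistics on the two big tables last updated?
    SELECT OBJECT_NAME(s.[object_id])           AS table_name,
           s.name                               AS stats_name,
           STATS_DATE(s.[object_id], s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE s.[object_id] IN (OBJECT_ID(N'dbo.BigTableA'),
                            OBJECT_ID(N'dbo.BigTableB'))
    ORDER BY last_updated;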
For those of you with large, mostly read-heavy tables: what is the maximum number of rows you have had in a single table before experiencing performance issues like those described above? And how did you remedy the situation?
Thanks in advance--