SQLServerCentral is supported by Red Gate Software Ltd.
Look at the size of my BCHR

By Tony Davis

Trawling the SQL Server blogosphere, I often encounter references to the Buffer Cache Hit Ratio (BCHR). Its mention will almost inevitably be closely followed by a statement to the effect that "If yours is smaller than 90%, you have problems" or "Mine is 99.913%, which is excellent".

It got me wondering whether, generally speaking, SQL Server DBAs are prone to going around boasting about the size of their BCHR. Unfortunately, like boasting about the size of your car, I doubt that having a big BCHR holds any real significance, beyond creating a warm feeling of self-satisfaction in the owner.

I have heard many compelling arguments, and they have more or less convinced me, that the BCHR is meaningless as an indicator of database efficiency. Put simply, the BCHR measures how often SQL Server finds the data it needs in memory (the buffer cache) rather than having to read it from disk.
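For reference, the ratio can be computed from the Buffer Manager performance counters exposed in `sys.dm_os_performance_counters`. A minimal sketch (the `object_name` prefix varies between default and named instances, hence the deliberately loose `LIKE` filter):

```sql
-- Current BCHR, computed from the Buffer Manager performance counters.
-- The "base" counter is the denominator SQL Server maintains for ratio counters.
SELECT CAST(a.cntr_value AS float) * 100.0 / b.cntr_value
           AS [Buffer Cache Hit Ratio %]
FROM   sys.dm_os_performance_counters a
JOIN   sys.dm_os_performance_counters b
       ON b.object_name = a.object_name
WHERE  a.counter_name = 'Buffer cache hit ratio'
  AND  b.counter_name = 'Buffer cache hit ratio base'
  AND  a.object_name LIKE '%Buffer Manager%';
```

Note that this is a cumulative, server-wide figure since the counters were last reset, which is precisely why it smooths over the detail discussed below.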

So say we have a 99.9999% BCHR and a well-performing database. Most DBAs would, with some justification, feel that this indicates a highly-efficient, trouble-free system, where most data access is being happily served from memory rather than costly disk reads.

Now say it suddenly drops to 85% and stays there. Does this indicate a problem? Some DBAs would say yes, and devote considerable time to rooting out that problem, and finding ways to boost the BCHR back up to a reassuringly high number.

The trouble is that the BCHR, like many similar ratios, is a database-wide average, and the devil is always in the detail. A drop in the BCHR does not necessarily indicate a problem; it indicates that something changed, and nothing more. Yes, it could mean that, for example, a query previously served by an index is now doing table scans, causing more physical reads and a lower BCHR. But, equally, it could mean that the database is running more efficiently. For example, if a query that was performing a nested loop operation, reading the same cached table over and over again, suddenly adopted a more efficient plan, this could decrease the BCHR but improve performance!
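Chasing the detail rather than the average means looking at which individual statements are actually hitting disk. One way to do that (on SQL Server 2005 and later) is via the cached-plan statistics DMV; a sketch:

```sql
-- Which cached statements account for the most physical reads?
-- Comparing physical to logical reads per statement tells you far more
-- than a server-wide ratio ever can.
SELECT TOP (10)
       qs.execution_count,
       qs.total_physical_reads,
       qs.total_logical_reads,
       SUBSTRING(st.text, 1, 200) AS statement_start
FROM   sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_physical_reads DESC;
```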

In fact, a change in the BCHR could mean almost anything and is therefore, it could be argued, meaningless. Consider, for example, Connor McDonald's PL/SQL routine, choose_a_hit_ratio, which can give you any BCHR you desire, simply by adding useless workload to, or removing it from, your server (I really must translate that to SQL Server at some point. Any volunteers?).
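To be clear about why such a routine is possible at all, here is a sketch of the underlying trick, not a translation of Connor's routine: burn CPU on reads that are guaranteed to be cache hits, and the ratio climbs with no benefit to anyone. The table name is a placeholder for any small table already resident in cache:

```sql
-- Sketch only: inflate the BCHR with useless, guaranteed-cache-hit reads.
-- dbo.AnySmallTable is hypothetical; any small, already-cached table will do.
DECLARE @i int, @dummy int;
SET @i = 0;
WHILE @i < 100000
BEGIN
    SELECT @dummy = COUNT(*) FROM dbo.AnySmallTable; -- all logical reads, no physical
    SET @i = @i + 1;
END;
```

A real choose_a_hit_ratio equivalent would wrap a loop like this in a calculation of how many extra logical reads are needed to reach the target ratio.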

I'd be interested to hear people's thoughts on the BCHR and if and how they use this metric in their performance monitoring.

Cheers,

Tony Davis.
