SQLServerCentral Article

A Bit About 64-bit


I recently attended a seminar at Microsoft's NYC headquarters discussing, in depth, the release of Microsoft's powerful 64-bit SQL Server, and I wanted to share my thoughts with your readers. In what is being described as a watershed event, Microsoft is clearly setting its sights on the enterprise-class customer, aiming to finally quash the notion that SQL Server doesn't scale. Indeed, the combined release of a 64-bit Windows OS, the Itanium 2 processor, and 64-bit SQL Server is a trifecta that is sure to give Oracle and DB2 on the UNIX platform a run for their money.

So, what does this mean to the average SQL Server customer and MS SQL DBA? Perhaps nothing in the short term, but in the long term 64-bit will become the norm, and its merits are worth discussing here. At least for now, 64-bit gives us the ammunition to take on the UNIX SAs in the ongoing UNIX vs. Microsoft debate. But I digress.

Meanwhile, back at the conference, "scalability" and "availability" were no longer just buzzwords, but actual concepts fully implemented, tested, and benchmarked with the TPC-C benchmark (from the Transaction Processing Performance Council). In fact, the highest scores put up by DB2 and Oracle on UNIX were trounced by SQL Server 2000's 64-bit architecture. (See this MS press release, with links to the TPC-C results.)

Now, on to the highlights of the presentation. One of the more relevant discussions was the comparison between the current 32-bit platform and the new 64-bit one. This article will focus on the specific advantages 64-bit has to offer SQL Server 32-bit customers.

The presentation began with a question: "Why 64-bit?" In talking about the challenges of a 32-bit environment, the speaker took aim at the typical application development issues that often plague performance. While 64-bit is no excuse for poor code, it may actually compensate for some of these inefficiencies. Among the observations:

  1. 9 out of 10 times the database is blamed for slow performance;
  2. Applications are bound to have inefficient code;
  3. Developers will always write unoptimized queries;
  4. As soon as one bottleneck is resolved, another one pops up.

Indeed, while the 32-bit platform scales well for most large transaction-processing and data-warehousing applications, supporting up to 32 CPUs and 64 GB of RAM, there are definite limitations in the way it actually uses these vital resources. That is precisely what 64-bit promises to solve.

For example, in the 32-bit SQL Server world, the maximum memory supported by the virtual address space is 4 GB. Out of this, the OS kernel reserves 2 GB, so SQL Server in reality only gets the other 2 GB. By setting the /3GB switch in boot.ini, you can let SQL Server use 3 GB, leaving the kernel 1 GB.
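For illustration, the /3GB switch is simply appended to the OS entry in boot.ini; the disk and partition numbers below are placeholders for whatever your boot entry already contains:

  [operating systems]
  multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB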

Furthermore, in order to allow SQL Server to use up to 64 GB, one must add the /PAE switch to boot.ini, as well as enable AWE (Address Windowing Extensions) using sp_configure. However, even this does not maximize the efficiency of your memory. Case in point: the additional RAM becomes available only to the buffer cache, not the procedure cache, which remains confined to the 4 GB address space per instance. There are other 32-bit limitations affecting memory usage (which I won't go into in detail), such as page life expectancy and memtoleave (the region used for out-of-process memory requests). So the RAM distributed across the SQL Server memory pool is not all "real" memory, and as some real-world memory stress tests have shown, this can lead to memory contention, a lower cache hit ratio, and, of course, slower performance. These limitations do not exist in the 64-bit world, and AWE is not used.
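As a sketch of the 32-bit workaround, here is how AWE is typically enabled on SQL Server 2000 Enterprise Edition; the memory value is a placeholder for illustration, and the /PAE switch must already be set in boot.ini:

  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE
  GO
  EXEC sp_configure 'awe enabled', 1             -- takes effect after a restart
  EXEC sp_configure 'max server memory', 16384   -- MB; always cap this when AWE is on
  RECONFIGURE
  GO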

Another important measure is CPU utilization, and getting the best performance ROI as well as increased throughput. One of the terms I became intimately familiar with at the presentation was "context switching." Whether or not you have heard this term frequently, it is vital to the overall performance of your SQL Server. A context switch occurs when the OS stops the thread executing on a processor and swaps in another thread. The higher the CPU usage or I/O contention, the more context switching occurs, and the lower the throughput; you can watch this with the System: Context Switches/sec counter in System Monitor. The number of context switches should be kept as small as possible.

The worst thing that can occur is having your OS resources dedicated to excessive context switching rather than to the actual work required by SQL Server. Because SQL Server runs as a single process, you end up with increased I/O queues and thread swapping. This leads us to a brief discussion of parallelism.

Although SQL Server does a good job of dynamically managing most resources, when you get into higher-transaction systems and require a greater degree of parallelism, you start getting into more advanced levels of configuration. You then need to become familiar with terms such as the following (see KB article 319942 for more information; a configuration sketch follows the list):

  1. Affinity Mask
  2. Lightweight Pooling
  3. Max Async IO
  4. Max Worker Threads
  5. Memory
  6. Priority Boost
  7. Set Working Set Size

With 64-bit, you increase CPU parallelism, and it becomes possible to achieve true parallelism for multi-threaded applications. While context switches cannot be avoided on multi-processor machines, they are less of an issue on the 64-bit architecture.
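The degree of parallelism itself remains a tunable setting on either platform. As a point of reference, this sketch (with a placeholder value) shows the knob in SQL Server 2000, where 0 lets the optimizer use all available CPUs for parallel plans:

  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE
  GO
  EXEC sp_configure 'max degree of parallelism', 0   -- 0 = use all available CPUs
  RECONFIGURE
  GO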

A practical example of the type of environment best suited for 64-bit SQL Server is one where multiple instances run on a single server. Since there is only so much hardware you can throw at such a setup while still falling short of optimal performance, this new architecture may be right for you. In fact, I highly recommend it.

Take, for example, the many companies now consolidating their SQL Server infrastructure onto a shared SQL farm. I am currently at a client that is doing precisely this: they have invested heavily in hardware and SAN storage, upgraded all the SQL Servers to version 2000, and even moved the server OS to Windows 2003. The SQL farm is set up as an active/active two-node cluster, which maxes out at eight active instances per cluster (four active per node), with 8 CPUs and 8 GB of RAM per node. So, if one node fails over, the other node picks up the slack of the other four instances. My client is currently testing their proof-of-concept server. As anyone who has tried this knows, it is quite a task, even with a third-party product to manage CPU resources and avoid contention. Nonetheless, they have statically allocated a portion of RAM to each instance and are struggling with CPU affinity.
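Static memory allocation of this sort is typically done per instance with sp_configure. A minimal sketch, assuming a hypothetical split of roughly 1.5 GB for one of four instances on an 8 GB node:

  -- run against one instance; repeat with appropriate values on the others
  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE
  GO
  EXEC sp_configure 'min server memory', 1536   -- MB; floor
  EXEC sp_configure 'max server memory', 1536   -- MB; ceiling (fixed size)
  RECONFIGURE
  GO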

Some applications are memory-intensive, some are OLTP, and others are OLAP, and they vary up and down the spectrum in terms of size, transactions, etc. - a potpourri of application databases that need to play nice with each other (not to mention the users). Indeed, my client seems to have taken 32-bit a bit too far without considering an upgrade to 64-bit. That is what I have recommended before they go live. (Whether they listen is another thing, so I hope they'll read my article!)

Among the general pitfalls of running multiple instances on 32-bit:

  1. Instances will compete for resources;
  2. Application usage is dynamic, and user load is unpredictable;
  3. A single runaway query can cause all instances to suffer;
  4. Memory and CPU resources must be manually configured per instance.

Since SQL Server works best with dynamic configuration, multiple instances will benefit enormously from the 64-bit architecture. Without getting too deep into the OS side of 64-bit (i.e., Windows 2003 64-bit), the Windows System Resource Manager (WSRM) will effectively manage CPU and memory dynamically, in turn making SQL Server administration easier. Therefore, there will be no need to "affinitize" resources per instance. (And think of all those thousands of dollars invested in the third-party resource manager.)

I only wish my client had been at the seminar when the presenter described the perfect scenario for going 64-bit: the multi-instance server farm.

The upgrade path from 32-bit databases to SQL Server 64-bit is in itself as simple as a backup and restore. (Of course, the hardware and 64-bit OS must be in place first.) Data warehousing, high-transaction OLTP, and large e-commerce applications are among the beneficiaries of this new 64-bit technology as well.
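To make the path concrete, here is a sketch of that backup-and-restore upgrade; the database name, logical file names, and paths are all hypothetical:

  -- On the 32-bit server:
  BACKUP DATABASE SalesDB
     TO DISK = 'D:\Backups\SalesDB.bak'
  GO
  -- On the 64-bit server, after copying the .bak file over:
  RESTORE DATABASE SalesDB
     FROM DISK = 'D:\Backups\SalesDB.bak'
     WITH MOVE 'SalesDB_Data' TO 'E:\Data\SalesDB.mdf',
          MOVE 'SalesDB_Log'  TO 'F:\Logs\SalesDB.ldf'
  GO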

On the flip side, some of the reasons listed to stay with 32-bit are as follows:

  1. Queries are not memory intensive, and are optimized;
  2. High levels of CPU parallelism are not required;
  3. Maintenance window is fairly flexible;
  4. Large-scale consolidation is not required.

Finally, 64-bit is an extraordinary foray into the world of high-end, multi-threaded, ultra-fast processing power, intended for high availability, performance, and scalability. So what is the bottom line for businesses weighing budget and technology planning? Microsoft believes that the release of 64-bit SQL Server, combined with Windows 2003 64-bit and the Itanium 2 processor, will reduce the total cost of ownership (TCO).

In addition, 64-bit will enable customers to consolidate multiple-terabyte databases in a multi-instance environment with no performance degradation. In turn, licensing fees and management costs will be reduced. While the need for such robust technology currently defines the enterprise-class client, it is sure to be widely used one day by customers large and small.

Written by: Robert Pearl, President

Pearl Knowledge Solutions, Inc.

http://www.pearlknows.com

Copyright © 2004 - All Rights Reserved.

Note: Not to be reprinted or published without express permission of the author.
