SQLServerCentral Article

Standing Back And Looking At Benchmark Results


Once again Microsoft have topped the charts in terms of database performance figures, clocking up 262,243 transactions per minute in standardised tests.

The numbers look good – I’d love to see the kit I look after now chalk up results like these, but could I really do it? Do these figures really mean anything to Joe-Average DBA?

Well, for starters the figures were attained on a beta copy of SQL Server 2000, and I would not fancy running my mission-critical apps on beta software, much as I have found the beta version I am currently playing with to be both stable and impressive. The release version (due very soon by all accounts) will no doubt perform to similar standards, but you cannot buy it quite yet, and when you can, would you want to be first to put it live?

What about the hardware used for the tests? Well, the figures were not exactly attained on the kind of souped-up super-server you want under your desk but your manager will not sign off on – they were achieved using twelve of them. Twelve Compaq 8500 servers, all sharing the load between them. Each of these servers was stuffed with eight gigabytes of RAM and eight 700 MHz Xeon CPUs.

By my reckoning 8 x 12 = 96. Agreed? Then we are looking at a cluster of kit containing 96 GB of RAM and 96 CPUs. My calculator says that 96 x 700 = 67,200, so we are also looking at 67.2 GHz of raw processing power.
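
If you want to check the sums yourself, the arithmetic fits in a couple of lines of T-SQL (the figures are simply those quoted above):

-- Sanity check of the cluster totals quoted above
SELECT 12 * 8                  AS TotalRamGB,  -- 12 servers x 8 GB each   = 96 GB
       12 * 8                  AS TotalCpus,   -- 12 servers x 8 CPUs each = 96 CPUs
       (12 * 8 * 700) / 1000.0 AS TotalGHz     -- 96 CPUs x 700 MHz        = 67.2 GHz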

The load was "balanced" using the new "Federated Database" technology included in SQL 2000. Put simply, databases, and even individual tables, are distributed across multiple servers – imagine having a database with customers whose names begin with A, B or C on one server, D, E and F customers on another server and so on, but accessed through an abstraction layer that makes it look like you only really have one big database on one big machine – that’s what federated databases are about. Some people (Oracle for instance) have expressed reservations about the reliability of this model, and it’s certainly not suited to every application.
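
To make that a little more concrete, here is a rough sketch of the kind of distributed partitioned view that sits behind a federation. The server names, database name and table layout are invented for illustration; a real setup also needs linked-server definitions and a matching member table and view on every machine in the federation.

-- On ServerABC: the member table holding customers A to C.
-- The CHECK constraint on the partitioning column is what lets
-- SQL Server work out which member server holds which rows.
CREATE TABLE Customers_A_C (
    CustomerName varchar(50) NOT NULL PRIMARY KEY
        CHECK (CustomerName >= 'A' AND CustomerName < 'D'),
    Address      varchar(100) NULL
)
GO

-- On ServerDEF: the member table holding customers D to F
CREATE TABLE Customers_D_F (
    CustomerName varchar(50) NOT NULL PRIMARY KEY
        CHECK (CustomerName >= 'D' AND CustomerName < 'G'),
    Address      varchar(100) NULL
)
GO

-- On each member server, a view glues the pieces back together across
-- linked servers, so applications just see one "Customers" table
CREATE VIEW Customers AS
    SELECT CustomerName, Address FROM ServerABC.SalesDB.dbo.Customers_A_C
    UNION ALL
    SELECT CustomerName, Address FROM ServerDEF.SalesDB.dbo.Customers_D_F
    -- ...and so on for the rest of the alphabet
GO

A query against the view that filters on CustomerName only needs to touch the server holding the relevant range, which is how the work gets spread across the federation.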

And now the best bit – the bottom line. The bottom line is 5.3 million US Dollars for that setup.

Except that that is not the bottom line. SQL Server was not sharing all that expensive RAM and CPU power with anything it didn’t have to. Supporting applications, such as IIS, were run on separate servers – with no fewer than three of these "ancillary" servers supporting each SQL Server – and my elementary maths concludes that that makes 48 machines contributing in some way or other to these impressive benchmark results.

Don’t get me wrong here – I am not knocking SQL Server, Microsoft or the Transaction Processing Performance Council (TPC) – these figures are still impressive. Everybody plays by the same rules when they go to the Benchmark Olympics – everybody brings as much kit as their software will run on, nobody pretends (at least not very hard) that the tests accurately mirror real-world applications or hardware configurations, and everybody squeezes out every last ounce of performance, because winning is important. However, unless your employers have a serious hardware and software budget, don’t expect to be putting a quarter of a million transactions a minute through your own servers any time soon.

About the author

Neil Boyle is an independent SQL Server consultant working out of London, England. Neil's free SQL Server guide is available on-line at http://www.impetus-sql.co.uk
