
Reducing Round Trips - Part 2


Last week I started discussing Reducing

Round Trips - a look at ways to make your applications more responsive and

reduce both server and network load. It's a fun subject with plenty more to

explore. Frequent discussion area contributor Paul Ibison sent me his list of

techniques:

  • Using XML to return hierarchical recordsets, rather than

    running a query for each record of a parent recordset to get child records.

  • Using SET NOCOUNT ON avoids sending an unnecessary 'rows affected'

    message to the client after each statement.

  • In ADO don't use the Refresh method of the Parameters

    collection - assign them explicitly (see the sketch after this list).

  • Send in a pipe-delimited list of values as an argument to a

    stored procedure. The procedure can iterate through the list and perform the

    necessary queries.

  • In a middle-tier MTS object, use a cached recordset which

    can hold lookup values which hardly ever change. The middle tier object can

    be refreshed when needed, but basically clients can populate listboxes etc

    without querying the SQL server at all.

  • If using client-side cursors, use the UpdateBatch method of

    the Recordset object.
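
To make the Parameters point concrete, here's a minimal sketch in VB6/ADO. The procedure name usp_GetContact and its single parameter are made up for illustration; the point is that building the collection by hand skips the metadata query that Refresh makes.

' Build the Parameters collection explicitly instead of calling
' cmd.Parameters.Refresh, which costs an extra round trip to the
' server just to discover parameter metadata.
Dim cmd As ADODB.Command
Set cmd = New ADODB.Command
Set cmd.ActiveConnection = cn          ' cn is an open ADODB.Connection
cmd.CommandText = "usp_GetContact"     ' hypothetical stored procedure
cmd.CommandType = adCmdStoredProc
cmd.Parameters.Append cmd.CreateParameter("@ContactID", adInteger, adParamInput, , 42)

Dim rs As ADODB.Recordset
Set rs = cmd.Execute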

I'm not sure I agree with Paul entirely on the XML; I think using the shaped recordset provider would be a lighter-weight alternative - something we'll have

to put to the test soon! Before we move onward I'd like to encourage readers to

send me more ideas or comments. What works, what doesn't? Email me or just add a

note in the attached discussion forum so that all can participate.

I've been working on a new app for the company where I work (along with Sean

& Leon) and

one of our big concerns was reducing the traffic when everyone logs in.

That's typically 8 am or so, break time around 10:15, lunch, then again around 3 pm.

We've got to bring over a bunch of pick lists each time. What's worse,

our app connects to any one of about 250 databases depending on which project is being worked on, and each brings its own picklists and other supporting data.

We ended up doing a variation of Paul's MTS object idea above, but built the

caching into the object model that runs on the client. We went with something incredibly simple and damn robust: saving the recordset to disk directly, like this:

rs.Save "C:\cachefile.rs", adPersistADTG

This persists both the data and the metadata to disk quickly and easily. When

you need it, you just call rs.Open with the file name and you've got your data back. We went with the proprietary Advanced Data TableGram (ADTG) format because it's a little smaller and it's binary, so our users are less likely to find it intriguing to edit in Notepad, but you can also save in the standard ADO XML format (adPersistXML) if you like. Either

way, it works, it's easy, and it saves a ton of code compared to writing your

own persistence code. Of course we had to come up with a naming convention and a

catalog - it's not unusual to have 20 or more files cached for 25 or more

databases on a single client!
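
Here's a minimal sketch of the whole cycle, assuming cn is an open ADODB.Connection; the query and file name are illustrative:

' First visit: pull the lookup data from the server and persist it.
Dim rs As ADODB.Recordset
Set rs = New ADODB.Recordset
rs.CursorLocation = adUseClient
rs.Open "SELECT StateCode, StateName FROM State", cn, adOpenStatic, adLockReadOnly
rs.Save "C:\Cache\Contact.State.rs", adPersistADTG
rs.Close

' Next login: reopen straight from disk - no trip to the server.
rs.Open "C:\Cache\Contact.State.rs", , adOpenStatic, adLockReadOnly, adCmdFile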

With any caching scheme you have to consider how to handle changes to the data. We wanted something simple. Is anything? :-) We actually ended up driving all our

picklists from a table that looked something like this:

PicklistName      StoredProcToPopulate              Version
Contact.Country   usp_StandardPicklist 'country'    19
Contact.State     usp_StandardPicklist 'state'      2
SIC               usp_SICPickList                   1

As you can see, each picklist is populated by executing whatever text is

contained in the StoredProcToPopulate column. Normally these are just selects

from a master table, but occasionally we need some special case implemented and

for those we can just use a different proc entirely. Version is incremented by a trigger on this table and, in some cases, by triggers on the source tables.
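
Refreshing a single picklist then looks something like this sketch - the table name Picklist and the cache path are stand-ins, but the idea is just to execute whatever command text the row holds:

' Look up how this picklist is populated, run that command text,
' and persist the result for next time.
Dim rsPick As ADODB.Recordset
Set rsPick = New ADODB.Recordset
rsPick.Open "SELECT StoredProcToPopulate FROM Picklist " & _
            "WHERE PicklistName = 'Contact.Country'", cn

Dim rsData As ADODB.Recordset
Set rsData = New ADODB.Recordset
rsData.CursorLocation = adUseClient
rsData.Open rsPick("StoredProcToPopulate").Value, cn, adOpenStatic, adLockReadOnly
rsData.Save "C:\Cache\Contact.Country.rs", adPersistADTG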

When our app changes connections to a database we query our picklist table to

get the current version for all picklists - then we compare that to what we have

in our catalog and get new/missing ones as needed. As soon as we have them we

immediately write them to disk - ready for the next time we log in. What's cool

about this is that some of these lists are VERY static - the state lookup list

hasn't changed in a while. Think about how much network bandwidth is saved by not retrieving it at each login. Then consider that some apps

might get the list every time they perform an action!
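
The version check itself is just a loop. In this sketch myCatalog is a Scripting.Dictionary mapping picklist name to the version we have cached, and RefreshPicklist is a hypothetical routine that runs StoredProcToPopulate and saves the result to disk:

' Compare server-side versions against the local catalog.
Dim rs As ADODB.Recordset
Set rs = New ADODB.Recordset
rs.Open "SELECT PicklistName, Version FROM Picklist", cn
Do While Not rs.EOF
    ' A name missing from the dictionary returns Empty, which
    ' compares as 0, so brand-new picklists get fetched too.
    If myCatalog(rs("PicklistName").Value) < rs("Version").Value Then
        RefreshPicklist rs("PicklistName").Value
    End If
    rs.MoveNext
Loop
rs.Close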

Is all that work worthwhile? I'm pretty sure it is, but I think we'll be able

to see much better once we deploy (next week if all goes well!) and we can do

some tuning, followed by some measuring. We're running on an 8-way box, and I'm

hoping we can support a lot...I mean a lot!...of users! A better question is -

how much work is involved in caching? We built this from scratch - I'd say

between design, discussion, discussion, etc, coding, we might have spent as much

as 150 hours on this part of the project. Not a lot, is it? Is it reusable? No, not 100%, but already we're seeing the advantages of using it and I think

we'll probably look at making it more reusable when we roll out v2 later this

year. And to give you an idea of how important we thought this was - that's

about 25% of the planned hours on the project.

Caching can be simple, complicated, dangerous, safe...even aggravating! But

it can really lighten the load. Have you tried it? How did you do it? Like it or

dislike it? Show me your comments!
