In a desktop application I helped write, we found that the normal user workflow caused the app to hit the database multiple times for the same data, as users would flip back and forth between the various screens. It was painfully apparent to them that we had a lot of data to pull.
We came up with the idea of a Database Agent DLL. Its only purpose was to query the database for all the data the user needed, right at the start. Then as they flipped back and forth, all the data was right there in memory. At the end, they could push Save and we would do one mass update.
Although our memory footprint grew substantially, the response time between screens dropped to almost nothing. Bottom line...customers happy.
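A minimal sketch of that Database Agent idea, using Python's stdlib sqlite3 as a stand-in for the real database. The class, table, and column names here are invented for illustration; the original was a desktop DLL, not Python.

```python
import sqlite3

class DatabaseAgent:
    """Load all needed rows up front, serve reads from memory while the
    user flips between screens, then write every pending change back in
    one mass update. (Hypothetical sketch; names are invented.)"""

    def __init__(self, conn):
        self.conn = conn
        self.cache = {}    # row id -> value, held in memory
        self.dirty = set() # ids the user changed on some screen

    def load_all(self):
        # One big query at startup instead of one query per screen.
        for row_id, value in self.conn.execute("SELECT id, value FROM items"):
            self.cache[row_id] = value

    def get(self, row_id):
        # Screen flips read from memory -- no round trip to the database.
        return self.cache[row_id]

    def update(self, row_id, value):
        self.cache[row_id] = value
        self.dirty.add(row_id)

    def save(self):
        # One mass update on a single connection, in a single transaction.
        with self.conn:
            self.conn.executemany(
                "UPDATE items SET value = ? WHERE id = ?",
                [(self.cache[i], i) for i in self.dirty])
        self.dirty.clear()
```

The trade-off is exactly the one described above: memory footprint grows, but screen-to-screen reads never touch the wire.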
Performance is all about perception. I like to use typed data sets to collect several database changes and submit them to the database all at once, on a single connection and in a single non-distributed transaction. It requires that more data be available on the client than one might normally like, but if the perceived performance is better or transactional integrity demands it, then it's what you need to do.
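The collect-then-submit pattern can be sketched in Python with the stdlib sqlite3 module standing in for the typed-dataset/SQL Server setup (the `orders` table is invented for illustration): every collected change goes over one connection in one transaction, so either all of them land or none do.

```python
import sqlite3

def submit_changes(conn, changes):
    """Apply a batch of collected (sql, params) changes in one
    transaction on one connection. (Illustrative sketch.)"""
    with conn:  # commits on success, rolls back on any exception
        for sql, params in changes:
            conn.execute(sql, params)

# Collect edits on the client, then push them all at once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
submit_changes(conn, [
    ("INSERT INTO orders (id, qty) VALUES (?, ?)", (1, 5)),
    ("INSERT INTO orders (id, qty) VALUES (?, ?)", (2, 3)),
    ("UPDATE orders SET qty = qty + 1 WHERE id = ?", (1,)),
])
print(conn.execute("SELECT qty FROM orders WHERE id = 1").fetchone()[0])  # 6
```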
[font="Tahoma"]Bryant E. Byrd, BSSE MCDBA MCAD[/font]
Business Intelligence Administrator
MSBI Administration Blog
This is a good article and good advice. Now about round trips:
There are at least two network trips: from the client to the web server, and from the web server to the SQL Server, if they are not on the same machine. If they are on the same machine, then we should not concern ourselves much with performance, because it is probably a small installation.
I agree with Steve Jones about doing intensive processing on the web server, meaning processing that could not be done at the back end in stored procedures. This holds unless there is a high volume of user requests. On the other hand, it is probably cheaper and easier to set up and administer a web farm (we do not pay extra for IIS) than to run two SQL Server Enterprise Edition instances or two clusters.
So I vote for processing things on the web server. That does not mean you should not minimize round trips where you can and combine queries.
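Combining queries is worth making concrete. A sketch with stdlib sqlite3 (table and data invented for illustration): fetching N rows with N separate queries versus one combined query with an `IN` list, which costs a single round trip.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "ann"), (2, "bob"), (3, "cat")])

wanted = [1, 3]

# N round trips: one query per id.
names_slow = [conn.execute("SELECT name FROM users WHERE id = ?",
                           (i,)).fetchone()[0] for i in wanted]

# One round trip: combine into a single query with an IN list.
placeholders = ",".join("?" * len(wanted))
names_fast = [r[0] for r in conn.execute(
    f"SELECT name FROM users WHERE id IN ({placeholders}) ORDER BY id",
    wanted)]

assert names_slow == names_fast
```

Against a database on another machine, the difference is N network waits versus one; in-process sqlite3 only illustrates the shape of the fix.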
There is also an issue where, in an ASP application, some DBAs write long, chunky stored procedures while the ASP application on the client side does nothing.
So when you request the page it takes ages to load, and I find that difficult to explain sometimes, especially when the DBA doesn't have any experience in server-side technologies.
So I guess to reduce round trips, the work should be shared sensibly between the tiers, depending on the scenario,
and not just by packing everything into stored procedures!
Always good to keep this tip in mind ... I'm a SQL Server DBA, but have had to develop against other data sources, such as Oracle, using ADO and OLE DB providers.
I've also found from experience that before you get too far down the road in a project, it is a good idea to write a small script or stub program that makes a connection to the data source and loops through the provider's connection properties, then grabs a small recordset and inspects the recordset properties. (You can also do this from Excel in the VB debug environment, where the properties are displayed for you.) From there you can see things such as multi-recordset support or paging support.
Checking these two things helps you avoid a lot of assumptions about which ADO features the provider supports.
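The same probe-before-you-assume idea has a Python analogue: interrogate a DB-API driver's declared and optional capabilities at startup instead of assuming them. This is an illustrative stand-in for looping through ADO provider properties, shown here with the stdlib sqlite3 driver; the `probe_driver` helper is invented.

```python
import sqlite3

def probe_driver(module, conn):
    """Report what a DB-API driver actually supports before the project
    relies on it -- a rough analogue of inspecting an ADO provider's
    connection properties. (Hypothetical sketch.)"""
    return {
        # Declared by every DB-API 2.0 module:
        "apilevel": getattr(module, "apilevel", None),
        "paramstyle": getattr(module, "paramstyle", None),
        "threadsafety": getattr(module, "threadsafety", None),
        # Optional features: probe for them instead of assuming.
        "executemany": hasattr(conn.cursor(), "executemany"),
        "executescript": hasattr(conn, "executescript"),  # multi-statement batches
    }

conn = sqlite3.connect(":memory:")
caps = probe_driver(sqlite3, conn)
print(caps["paramstyle"])  # sqlite3 declares "qmark" placeholders
```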
I don't know about round trips, but what you often want to reduce is selecting a million rows from a remote server, bringing them to the client or to another server, doing all the work there, and handing one nice shiny row back to the user. That scenario is far more problematic for us than the number of round trips. You also need to take resource locking and isolation levels into account when you're doing everything in one trip.
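That anti-pattern and its fix can be sketched with stdlib sqlite3 (table and data invented for illustration): the two approaches return the same answer, but only one of them drags every row across the wire first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("west", 250), ("east", 50)])

# Anti-pattern: pull every row to the client, then filter and aggregate there.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
client_total = sum(amount for region, amount in rows if region == "east")

# Better: let the server filter and aggregate, and hand back one row.
(server_total,) = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = ?", ("east",)).fetchone()

assert client_total == server_total
```

With a real remote server the first version also holds locks longer while the rows stream out, which is where the isolation-level concern comes in.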