• I wouldn't worry too much about it. While all of a client's rows may be logically located together, after these page splits they'll be physically dispersed on disk and in less-than-full pages, so you're liable to take a performance hit from random I/O when working with that client, but it sounds like your system is handling the results of an approach like this. What percentage of fragmentation are you seeing in your clustered indexes after one day of normal business? Does the page density change at all? I think if your reads are fast, and you're primarily doing reads, and you keep up with your index maintenance, you'll be fine. What are you using to do index maintenance, and how often do you run it? How often are the clustered indexes rebuilt?
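  A minimal sketch of the kind of check being asked about here, assuming SQL Server 2005 or later; the DMV sys.dm_db_index_physical_stats reports both fragmentation and page density when run in SAMPLED or DETAILED mode (in LIMITED mode the density column comes back NULL):

      -- Fragmentation and page density of every clustered index
      -- in the current database (SAMPLED mode fills in the density)
      SELECT  OBJECT_NAME(ips.object_id)      AS table_name,
              i.name                          AS index_name,
              ips.avg_fragmentation_in_percent,
              ips.avg_page_space_used_percent AS page_density_pct,
              ips.page_count
      FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
      JOIN    sys.indexes AS i
              ON  i.object_id = ips.object_id
              AND i.index_id  = ips.index_id
      WHERE   ips.index_id = 1   -- index_id 1 = the clustered index
      ORDER BY ips.avg_fragmentation_in_percent DESC;

  Anything showing high fragmentation and low page density across a large page_count is a rebuild candidate.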

    I'll have to look into this. We have several customers running 'this' database. Some customers do proactive maintenance; others probably do not.

    It is very possible that there are databases still on SQL 2000 where the indexes (and clustered tables) have never been rebuilt, and that have now been running for over 10 years.

    At the time of design (SQL Server 7), and with the cheaper editions of SQL Server, the possibilities for maintenance were limited. Our own knowledge of maintenance was also limited at design time (10 years ago). We did have experience with, and trust in, B-trees, so our choices were based on that.

    But when I get the opportunity I will have a look at the databases running at customer sites, probably running some DBCC checks and maybe some other checks, along the lines of the sketch below. At the moment I do not have access to the databases at customer sites.
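    For the installs still on SQL 2000, a sketch of the kind of DBCC checks meant here; the database and table names are placeholders, and DBCC SHOWCONTIG is the pre-2005 way to read fragmentation and page density:

        -- Logical and allocation integrity of the whole database
        DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;

        -- Fragmentation figures for one clustered table; watch the
        -- 'Scan Density' and 'Avg. Page Density' columns in the output
        DBCC SHOWCONTIG ('dbo.Clients') WITH TABLERESULTS, ALL_INDEXES;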

    (I could add some maintenance scripts to the next update script; update scripts are run by the customer, so normally I do not get results back, except when something goes wrong, and that happens only very rarely :-).) A sketch of what such a script could look like is below.
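    A sketch of a shipped maintenance script, assuming it may still run against SQL 2000 installs (DBCC DBREINDEX works from SQL 2000 onwards; ALTER INDEX ... REBUILD is the modern replacement, and the table names here are placeholders):

        -- Rebuild all indexes, clustered first, keeping the
        -- fill factor the tables were created with (0 = original)
        DBCC DBREINDEX ('dbo.Clients', '', 0);
        DBCC DBREINDEX ('dbo.Orders',  '', 0);

        -- The rebuild refreshes index statistics; this also
        -- refreshes the column statistics on each table
        UPDATE STATISTICS dbo.Clients;
        UPDATE STATISTICS dbo.Orders;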

    Thanks for your tips and pointers,

    Ben Brugman