RE:

  • This works fine if you have a low number of users. Unfortunately, what you gain in flexibility you lose in performance as the number of users rises (or that has been my experience), and that performance hit is not trivial. If, as you say, you have 300 users, then normally the central resource (your database server) has the cooperation of 300 CPUs to handle the presentation end of the app; here you are pushing part of that work down onto the shared resource, and your throughput will suffer.

  • I agree, you are definitely making a trade-off.

    That trade-off is essentially developer time for performance. However, serving application-based (internal) web pages is usually a situation where you can estimate roughly how many users to expect. In my case, while we have a good number of users, we don't typically have many concurrent users; I'd say it would rarely go over 40.

    SQL performance and web performance have not been an issue, so far, with basic editing or listing of data.

    I have encountered performance problems with a crosstab/pivot-table report design based on metadata, but a rewrite of the design solved that problem.

    I'd never recommend this approach for someone expecting several hundred or more concurrent users, but on smaller databases (tables under 1,000,000 rows) it is lightning quick.

    As a single developer I could never have hand-coded pages for all the applications we are currently supporting; this approach lets me implement and support more applications per $ spent. A rough sketch of the general idea follows below.
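
    To make the approach concrete, here is a minimal sketch of what a metadata-driven listing page can look like. This is purely illustrative, not my actual code: it assumes Python with pyodbc against SQL Server, and the DSN, table name, and render_listing helper are placeholders rather than anything from the real system.

    ```python
    # Illustrative sketch only: build a listing page from column metadata
    # instead of hand-coding a page per table. Assumes pyodbc and a reachable
    # SQL Server instance; the DSN and table name are placeholders.
    import pyodbc
    from html import escape

    def render_listing(conn, table_name, max_rows=50):
        """Build an HTML table for any base table, driven by INFORMATION_SCHEMA."""
        cur = conn.cursor()
        cur.execute(
            "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS "
            "WHERE TABLE_NAME = ? ORDER BY ORDINAL_POSITION",
            table_name,
        )
        columns = [row.COLUMN_NAME for row in cur.fetchall()]
        if not columns:
            raise ValueError(f"Unknown table: {table_name}")

        # Column names come from system metadata (not user input) and are
        # bracket-quoted; the table name was just validated against that same
        # metadata before being interpolated into the query.
        col_list = ", ".join(f"[{c}]" for c in columns)
        cur.execute(f"SELECT TOP {int(max_rows)} {col_list} FROM [{table_name}]")

        header = "".join(f"<th>{escape(c)}</th>" for c in columns)
        body = "".join(
            "<tr>" + "".join(f"<td>{escape(str(v))}</td>" for v in row) + "</tr>"
            for row in cur.fetchall()
        )
        return f"<table><tr>{header}</tr>{body}</table>"

    if __name__ == "__main__":
        # Placeholder DSN; point it at whatever SQL Server instance you use.
        conn = pyodbc.connect("DSN=AppDb;Trusted_Connection=yes")
        print(render_listing(conn, "Customers"))
    ```

    The point of the sketch is only that one generic routine, driven by the database's own metadata, can serve listing (or editing) pages for every table, which is where the developer-time saving comes from; the performance cost raised above comes from doing that work on the shared database/web tier rather than on each client.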

