High TPS performance

  • I have an application that generates numerous messages at each stage of a process. The application handles multiple customers, and there could be hundreds of messages generated per second per customer. There is the potential for the application to generate many thousands of messages per second.

    In order to handle the high TPS, the application will automatically generate a new table as each customer comes on board. So in the end, if we have 500 customers, we will have 500 copies of the same table. In addition to that, I want to have different file groups created on different spindles and the 500 tables distributed over those file groups.

    The reason for this design is to get the most TPS we possibly can. If anyone has a better idea, please let me know.

    I have two questions.

    First, how do I dynamically analyse the performance/load of each of the file groups? Second, if I am using just one stored procedure to populate the tables, will that become a bottleneck? Should I have a different stored procedure for each of the 500 tables?

    Thanks for any help with this.
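
    For the first question, one way to watch per-file-group load from inside SQL Server is the `sys.dm_io_virtual_file_stats` DMV joined to the file catalog views. This is only a sketch; the counters are cumulative since instance start, so in practice you would sample twice and diff:

    ```sql
    -- Sketch: per-file I/O load for the current database, grouped by filegroup.
    -- Counters are cumulative since the instance started, so sample this
    -- twice and subtract to see the load over an interval.
    SELECT  fg.name                AS filegroup_name,
            df.name                AS logical_file_name,
            vfs.num_of_reads,
            vfs.num_of_writes,
            vfs.num_of_bytes_written,
            vfs.io_stall_read_ms,
            vfs.io_stall_write_ms
    FROM    sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
    JOIN    sys.database_files AS df
            ON df.file_id = vfs.file_id
    LEFT JOIN sys.filegroups AS fg
            ON fg.data_space_id = df.data_space_id
    ORDER BY vfs.io_stall_write_ms DESC;
    ```

    High `io_stall_write_ms` relative to `num_of_writes` on one file would suggest that spindle (and the tables on its filegroup) is the hot spot.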

  • Why not use a message queue and write asynchronously to the database once the message has been pushed? This way you can still achieve massive throughput at the business layer without putting undue stress on the data layer.

    Jayanth Kurup

  • We are already using message queues. There is a concern that the queues will get flooded.

  • In that case you can shift the database writes into a Service Broker queue, which is more reliable and can be used for retry logic.

    Jayanth Kurup
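
    A minimal sketch of the Service Broker plumbing this suggests. All of the object names below are made up for illustration; the real design would also need an activation procedure that RECEIVEs from the queue and performs the actual INSERTs:

    ```sql
    -- Sketch: minimal Service Broker objects for queueing DB writes.
    -- Object names are placeholders, not from the original posts.
    CREATE MESSAGE TYPE [//Demo/CustomerMessage] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//Demo/CustomerContract]
        ([//Demo/CustomerMessage] SENT BY INITIATOR);
    CREATE QUEUE dbo.CustomerWriteQueue;
    CREATE SERVICE [//Demo/CustomerWriteService]
        ON QUEUE dbo.CustomerWriteQueue ([//Demo/CustomerContract]);
    GO

    -- Enqueue from the hot path instead of writing the row directly:
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Demo/CustomerWriteService]
        TO SERVICE   '//Demo/CustomerWriteService'
        ON CONTRACT  [//Demo/CustomerContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Demo/CustomerMessage] (N'<msg>...</msg>');
    ```

    Because the queue is transactional and stored in the database, messages survive restarts, and a failed write can simply be rolled back and retried by the reader.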
