• First, cluster the tables for best (overall) performance. In particular, don't assume that every table should be clustered by identity. That's most often simply not the best way to organize data tables.

    Try the structure below, especially given the current performance issues. Yes, there will then be multiple INSERT points into the transaction tables rather than a single one, but each insert location will match the clustering order for the related product_id. You can reorganize, or even partition, the underlying tables as needed to accommodate the new clustering; a DDL sketch follows the column outlines below.

    Also, consider adding a "confirmed" quantity and confirmation datetime to the product "master"; then you only have to sum the transaction rows from that time forward to get a current total. The confirmed amount could come from a physical inventory or simply a spot-check of that one item. For heavily used items, you might automatically compute a confirmed quantity every, say, 24 hours; query sketches follow the column outlines below.

    Product:

    * product_id INT (PK)

    * product char(10) NOT NULL

    * quantity_confirmed decimal(14, 4) NULL

    * quantity_confirmed_type tinyint NULL /*code indicating type of confirmation: 'Physical Inventory', 'Checked by Employee', etc.*/

    * quantity_confirmed_datetime datetime NULL

    * quantity_confirmed_identity int NULL

    Product_Status:

    * product_stat_id int IDENTITY(1, 1)

    * product_id int FK

    * quantity decimal(14, 4) NOT NULL

    * quantity_datetime datetime NOT NULL

    PRIMARY KEY( product_id, quantity_datetime, product_stat_id /*the identity column, included just to **guarantee** uniqueness*/ )

    Product_Transactions:

    * product_transaction_id int IDENTITY(1, 1)

    * product_id int (FK)

    * transaction_datetime datetime NOT NULL

    * quantity decimal(14,4) NOT NULL

    PRIMARY KEY( product_id, transaction_datetime, product_transaction_id /*the identity column, included just to **guarantee** uniqueness*/ )
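
    To make the outlines concrete, here is a minimal DDL sketch, assuming SQL Server. The dbo schema, the constraint names, and the datetime/int types for the columns the outline leaves untyped are my assumptions; the point is that each clustered primary key leads with product_id plus the datetime, rather than clustering on the identity alone.

```sql
-- Minimal DDL sketch of the outlines above, assuming SQL Server.
-- Schema name, constraint names, and types for untyped columns are assumptions.
CREATE TABLE dbo.Product
(
    product_id                  int            NOT NULL
        CONSTRAINT PK_Product PRIMARY KEY CLUSTERED,
    product                     char(10)       NOT NULL,
    quantity_confirmed          decimal(14, 4) NULL,
    quantity_confirmed_type     tinyint        NULL,  -- code: physical inventory, employee spot-check, etc.
    quantity_confirmed_datetime datetime       NULL,
    quantity_confirmed_identity int            NULL
);

CREATE TABLE dbo.Product_Status
(
    product_stat_id   int            IDENTITY(1, 1) NOT NULL,
    product_id        int            NOT NULL
        CONSTRAINT FK_Product_Status_Product REFERENCES dbo.Product (product_id),
    quantity          decimal(14, 4) NOT NULL,
    quantity_datetime datetime       NOT NULL,
    -- Clustered on product_id + datetime so rows for one product are stored in
    -- date order; the identity column is included only to guarantee uniqueness.
    CONSTRAINT PK_Product_Status
        PRIMARY KEY CLUSTERED (product_id, quantity_datetime, product_stat_id)
);

CREATE TABLE dbo.Product_Transactions
(
    product_transaction_id int            IDENTITY(1, 1) NOT NULL,
    product_id             int            NOT NULL
        CONSTRAINT FK_Product_Transactions_Product REFERENCES dbo.Product (product_id),
    transaction_datetime   datetime       NOT NULL,
    quantity               decimal(14, 4) NOT NULL,
    CONSTRAINT PK_Product_Transactions
        PRIMARY KEY CLUSTERED (product_id, transaction_datetime, product_transaction_id)
);
```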

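    With that shape in place, the current on-hand total for one item is just the last confirmed quantity plus the transactions recorded after the confirmation datetime. A sketch, where @product_id is a placeholder value:

```sql
-- Current on-hand quantity for one item: last confirmed quantity plus all
-- transactions recorded since the confirmation datetime (or all transactions
-- ever, if the item has never been confirmed).
DECLARE @product_id int = 12345;  -- placeholder

SELECT p.product_id,
       COALESCE(p.quantity_confirmed, 0)
           + COALESCE(SUM(pt.quantity), 0) AS quantity_on_hand
FROM dbo.Product p
LEFT JOIN dbo.Product_Transactions pt
       ON pt.product_id = p.product_id
      AND pt.transaction_datetime > COALESCE(p.quantity_confirmed_datetime, '19000101')
WHERE p.product_id = @product_id
GROUP BY p.product_id, p.quantity_confirmed;
```

    Because the clustered key on Product_Transactions leads with product_id and transaction_datetime, this touches only that product's recent rows as a single range seek instead of scanning or sorting the whole transaction table.
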
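    The "automatic" confirmation mentioned above could then be a scheduled job (for example, a SQL Agent job run every 24 hours) that rolls recent transactions into the confirmed columns. A rough sketch; the type code 3 for a system rollup is invented for illustration:

```sql
-- Roll transactions recorded since the last confirmation into the confirmed
-- quantity, and advance the confirmation datetime to the last row included.
UPDATE p
SET    p.quantity_confirmed          = COALESCE(p.quantity_confirmed, 0) + t.qty_delta,
       p.quantity_confirmed_type     = 3,  -- assumed code meaning 'system rollup'
       p.quantity_confirmed_datetime = t.last_dt
FROM dbo.Product p
JOIN (
        SELECT pt.product_id,
               SUM(pt.quantity)             AS qty_delta,
               MAX(pt.transaction_datetime) AS last_dt
        FROM dbo.Product_Transactions pt
        JOIN dbo.Product p2
          ON p2.product_id = pt.product_id
        WHERE pt.transaction_datetime > COALESCE(p2.quantity_confirmed_datetime, '19000101')
        GROUP BY pt.product_id
     ) t
  ON t.product_id = p.product_id;
```

    In practice you would likely restrict this to the heavily used items and leave a small safety window (for example, only roll up transactions older than a few minutes) so rows committed while the job runs are not skipped.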