Update Rate Performance

  • I have a performance issue question with SQL Server.  Please bear with me as I'm not a DBA and I've no idea if this is the best way to fix the issue I'm about to explain.

    I have an application that provides stock prices to 4 tables in an SQL Server database.  The tables have, for the most part, a fixed number of entries, namely all known stocks currently active across the major stock markets.  This amounts to some 5500+ records that are updated continuously during a trading day.

    During the course of the day we generally have a sustained update (note: update, not insert) rate of about 250 updates per second.  This can easily burst to over 1000.  Each update is about 10 columns of data and 70 bytes per record.  If this 1000+ update rate is sustained for more than a few seconds, the buffers in the application feeding the database begin to eat away at system memory until we are left with no choice but to bounce the app.

    Note: The application does not reside on the server and can be configured with up to 50 write threads to feed the database.  This, however, creates more lock waits than I care to talk about, so we keep the number of writer threads to a minimum: about 10-15 before the "fix" below.

    In an effort to deal with this bottleneck we added more memory to the server and installed software to create a Ram Disk. 

    We created a new filegroup (which resides on the Ram Disk) that contains the 4 tables and their associated indexes and transaction log.  We also moved the TempDB database and its associated log file to the Ram Disk as well.

    After all is said and done we simply jumped our sustained update rate to 300 updates/sec from 250 and we still get backed up buffers.  We were, however, able to drop the number of writer threads to the database down to between 2 and 5 threads and still get the same results on the other side where the data is being viewed.

    Overall we did gain a performance improvement but not to the extent I'd hoped.  I was expecting my bottleneck to disappear in a blaze of RAM writes rather than RAID writes but I was not impressed. 

    Is there anything else I can do to improve the update rate performance?  Is 300 updates/sec the best we can manage given the system we have?

    Other Details:  Windows 2000 Adv Server running on Dual Xeon Processors @ 2.0 GHz w/Raid10 (Mirror/Stripe), SQL Server 2000, 2 GB Ram (upgraded from 1 GB), 700+ MB allocated to SQL Server (more than double what it had prior to the mem upgrade) 600MB allocated to Ram Disk

    Thanx in advance,

    Frank

  • Are there any indexes on the columns being regularly updated? Are the columns being updated fixed or variable length? Is the WHERE clause on each update statement referencing column(s) which are indexed, and which the optimizer is actually selecting to use instead of a table scan?

  • All 4 tables have indexes. 

    All columns in all tables are fixed length. 

    The "where" clause in each instance uses 3 columns to uniquely identify a single record (Exchange, Session, and Symbol) and they are included in the index for each table.

    The terms "optimizer" and "table scan" are alien to me - I'll have to look that up.

  • >>All 4 tables have indexes. 

    But are any of the columns being updated indexed? When you update an indexed column to change the value, SQL Server not only has to write the data page, it also has to maintain the index page(s). If a column being updated is part of a clustered index, there is additional overhead because clustered index pages are kept in key order.
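
    If you're not sure which columns are indexed, sp_helpindex will list every index on a table and the columns it covers (the table name below is just a placeholder for one of your 4 tables):

     EXEC sp_helpindex 'dbo.YourStockTable'

    Any column that shows up in that output and is also touched by the update means extra index maintenance on every write.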

    >>The "where" clause in each instance uses 3 columns to uniquely identify a single record (Exchange, Session, and Symbol) and they are included in the index for each table.

    Maybe post the DDL for 1 of the tables & its indexes, along with a sample SQL update statement.

    >>The terms "optimizer" and "table scan" are alien to me - I'll have to look that up.

    Updating a table row requires locating the existing row first. The SQL Server optimizer determines the best method to locate the row, either by scanning the entire table looking for it or by using an appropriate index. You can determine what it's doing by pasting a sample update statement into Query Analyzer and hitting CTRL-K to analyse the query plan.
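
    For example, you can have Query Analyzer return just the plan for a representative update without executing it (the table name, column name, and values below are only placeholders - substitute one of your real statements):

     SET SHOWPLAN_TEXT ON
     GO
     UPDATE YourStockTable
     SET LastPrice = 10.5
     WHERE Exchange = 1 AND Session = 2 AND Symbol = 'MSFT'
     GO
     SET SHOWPLAN_TEXT OFF
     GO

    If the plan shows "Table Scan" instead of an index seek on your 3-column index, the optimizer isn't using the index and every update is scanning all 5500+ rows.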

  • Please be sure to post the DDL for 1 of the tables & its indexes, along with a sample SQL update statement.  Does the table have a unique clustered index?

    From your original post:" Other Details:  Windows 2000 Adv Server running on Dual Xeon Processors @ 2.0 GHz w/Raid10 (Mirror/Stripe), SQL Server 2000, 2 GB Ram (upgraded from 1 GB), 700+ MB allocated to SQL Server (more than double what it had prior to the mem upgrade) 600MB allocated to Ram Disk"

    You indicate that the disk drives are RAID-10, but it is recommended that the transaction log be on its own set of dedicated drives.  I have seen update throughput increase by a factor of 4 to 6 when the transaction logs are isolated from all other files.

    A minimal configuration might be:

    Partition C:  Disks 0 and 1 in a RAID 1 - for the OS, swap file, SQL Server software, and the master, model and tempdb databases.

    Partition E:  Disks 2 and 3 in a RAID 1 - for the user database data files, which have an .mdf extension.

    Partition F:  Disks 4 and 5 in a RAID 1 - for the user database transaction log files, which have an .ldf extension.

    If you can add additional pairs of disks, benchmark two alternatives:

    1.  Move the non-clustered indexes to the new pair of disks.

    2.  Move tempdb to the new pair of disks.
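
    For alternative 2, tempdb can be relocated with ALTER DATABASE ... MODIFY FILE, which is the documented method in SQL Server 2000; the change takes effect after the service is restarted. The logical file names below are the defaults, and the F: path is only an example:

     ALTER DATABASE tempdb
      MODIFY FILE (NAME = tempdev, FILENAME = 'F:\MSSQL\Data\tempdb.mdf')
     ALTER DATABASE tempdb
      MODIFY FILE (NAME = templog, FILENAME = 'F:\MSSQL\Data\templog.ldf')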

    SQL = Scarcely Qualifies as a Language

  • Just another idea:

    - have you checked that your network isn't the bottleneck?

    - how do you send the new data to the server: a server round trip for each statement, or do you make batch updates/inserts?

  • Looking deeper I found the updating scheme was a little more involved.  The feeder application reads in a table that enumerates all the stock symbols and provides a unique integer 'Id'.  This particular table is managed manually.  I assume it exists to simplify searching since now it requires only 1 column rather than the 3 I thought were being used.  For the record, I don't agree with this implementation but I have no control over that.

    What follows is the DDL of the table I previously alluded to that has the high update rates.  Following that is the stored procedure that gets called by the feeder application that performs the actual update.  As you can see there is only 1 unique index on 1 column ('Id') and it is strictly used for lookup.

    The filegroup '[RAM]' refers to the 600MB Ram Drive created to hold this table and its index, as well as the 3 other tables and their indexes.  These 4 tables do not exceed 35MB.  The transaction log for the entire database (which includes these 4 tables) is set to 300MB and also resides on the Ram Drive.  I have never seen the log file grow beyond 30MB.  And just FYI, the entire database is about 22GB.

    CREATE TABLE [dbo].[ISymbolX] (
     [Id]         [int] NOT NULL ,
     [Exchange]   [int] NOT NULL ,
     [POpen]      [float] NOT NULL ,
     [High]       [float] NOT NULL ,
     [Low]        [float] NOT NULL ,
     [QuoteCon]   [char] (4) NOT NULL ,
     [TickTrend]  [char] (8) NOT NULL ,
     [Bid]        [float] NOT NULL ,
     [Ask]        [float] NOT NULL ,
     [BidSize]    [float] NOT NULL ,
     [AskSize]    [float] NOT NULL ,
     [TradeDate]  [int] NOT NULL ,
     [TradeTime]  [int] NOT NULL ,
     [ORHigh]     [float] NULL ,
     [ORLow]      [float] NULL ,
     [ORClose]    [float] NULL ,
     [ORVol]      [float] NULL ,
     [LastHrOpen] [float] NULL ,
     [VolAt3]     [float] NULL ,
     [TStamp]     [datetime] NOT NULL
    ) ON [RAM]

    CREATE UNIQUE INDEX [IxId] ON [dbo].[ISymbolX]([Id]) WITH FILLFACTOR = 90 ON [RAM]

    /******************************************
     * Stored Procedure that makes the update *
     ******************************************/
    ALTER procedure PUpdRTISymbolX
      @id int, @exchange int, @popen float, @high float, @low float,
      @quotecon char(4), @ticktrend char(8), @bid float, @ask float,
      @bidsize float, @asksize float, @tdate int, @ttime int
    AS
     update ISymbolX
     set Exchange  = @exchange,
      POpen     = @popen,
      High      = @high,
      Low       = @low,
      QuoteCon  = @quotecon,
      TickTrend = @ticktrend,
      Bid       = @bid,
      Ask       = @ask,
      BidSize   = @bidsize,
      AskSize   = @asksize,
      TradeDate = @tdate,
      TradeTime = @ttime,
      TStamp    = GetDate()
     where Id = @id

     if( @@rowcount != 1 )
     begin
      raiserror('error : failed to update ISymbolX record : Id = %d', 16, -1, @id)
     end

     return 0
    /************************
     * End Stored Procedure *
     ************************/

  • 1 thing that jumps out is the lack of a clustered index - so although a fillfactor of 90 is being specified, it only applies to the relatively small index on Id, and does nothing to prevent the table itself from becoming highly fragmented over time. You also incur unnecessary IO, because locating a data page (assuming the index is even used) requires reading the index and then doing a lookup for the data page.

    Try re-creating the index as clustered on ID.

    Do you think the RAM disk is actually helping? What if you get rid of the RAM disk for the data, but use DBCC PINTABLE to hold the heavily updated tables in memory?
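
    If you try that, something along these lines should do it - the fillfactor is carried over from your existing index, and DBCC PINTABLE takes the database and table IDs (run it from within your database). Untested, so treat it as a sketch:

     -- rebuild the lookup index as the clustered index
     DROP INDEX ISymbolX.IxId
     CREATE UNIQUE CLUSTERED INDEX IxId ON dbo.ISymbolX(Id) WITH FILLFACTOR = 90 ON [RAM]

     -- pin the table so its pages stay in the buffer cache
     DECLARE @dbid int, @objid int
     SELECT @dbid = DB_ID(), @objid = OBJECT_ID('dbo.ISymbolX')
     DBCC PINTABLE (@dbid, @objid)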

  • We make an ODBC connection from the feeder app to the database.  The network for the database is isolated and runs at 100Mb/s.

    As near as I can tell the feeder app does not send batch updates, but does what we call price compression.  That is, it queues up "dirty" stocks (stocks whose data has changed since the last db update).  While in the queue, inbound data may continue to update the "dirty" stock in the feeder app's datastore.  When it's time to go to the database, the application sends the most recently received data and then marks the stock "clean".

    Even with this type of optimization we still get hundreds and even thousands of updates per second.

  • Grasshopper,

    I know from the Purchase Order that the machine contains four 38GB SCSI drives.  Windows reports the partitions as:

    Dell Utility Partition: 30MB

    C: ~11 GB

    D: ~57 GB

    I have no idea how to differentiate specific drives on a RAID system, but the point is moot anyway as I've already been told that reconfiguring the machine is not an option.

  • Veteran,

    Just did some reading on Pinned tables and clustered indexing.  Thank you for the information - it is definitely something to try.

    As for whether or not the RAM disk is actually helping: yes and no.  We do have slightly better performance, but the main issue still exists, that of not keeping up with the data rate, so it's not a solution.

    However, combining it with a clustered index and table pinning may well show a serious improvement.  I'll let you know.

  • Try replacing FLOAT with a DECIMAL that meets your requirements. (see FLOAT type)
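
    For the heavily updated price columns that would look something like this (the precision and scale are only a guess at what your prices need, and ALTER COLUMN rewrites the data, so test it off-hours first):

     ALTER TABLE dbo.ISymbolX ALTER COLUMN Bid decimal(19,4) NOT NULL
     ALTER TABLE dbo.ISymbolX ALTER COLUMN Ask decimal(19,4) NOT NULL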

    Put this table in a separate file from the rest of the tables (on a separate disk controller) so that reads of other tables won't affect writes to this table, which seems to be the bottleneck. (see Physical Database Files and Filegroups)  Also try to put the LOG file on another disk controller.

     

    Hope this helps a little :)

     

     


    Kindest Regards,

    Vasc

  • Frank, have you run Performance Monitor on the server to check out your resource utilization?

    I am not familiar with how well SQL Server will use a RAM drive, so I can't speak to that.  But I would consider PW's suggestion about DBCC PINTABLE.

    Also, based on the fact that you have 4 hard drives with questionable OS/transaction log/table separation, AND the fact that you have only 700MB of memory dedicated to SQL Server, there is a chance you are I/O constrained.

    I would definitely look into your drive utilization and memory utilization on that server.  If you can, post your findings for more suggestions. (Although if you are constrained, the suggestions may primarily be "buy more RAM or hard drives".)
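
    If it helps, these are the standard counters I would start with in Performance Monitor while the feed is running at full rate:

     PhysicalDisk: % Disk Time and Avg. Disk Queue Length (for each physical drive)
     Memory: Pages/sec
     SQLServer:Buffer Manager: Buffer cache hit ratio
     SQLServer:Locks: Lock Waits/sec

    Sustained high disk queue lengths or a low buffer cache hit ratio would point at the I/O and memory constraints mentioned above.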

    Good Luck

  • Even too much memory can slow things down!

    see: Identifying Bottlenecks

     


    Kindest Regards,

    Vasc

  • Yes, many things can cause performance problems.

    Familiarizing yourself with current hardware utilization metrics is a worthwhile exercise - and the server does so much of the work for you.
