table performance

  • I have a transactions table in my SQL Server 2008 database which stores data entered by users from my web application. I have also provided a page in the web application where users can search the data based on certain search parameters, and I have written a SELECT query to retrieve the data from this transactions table. The users use this search page very frequently, and I have observed that insertion into the transactions table is affected and becomes slow.

    Is there a way I can avoid the search on the transactions table, or is there any other feature available in SQL Server that will give me better performance?

    Thanks.

  • Try the NOLOCK query hint on your search query, so that no shared locks are issued against the table that would prohibit other transactions from modifying its data.

  • I have used NOLOCK as well, but the slowness still occurs sometimes.

    Thanks

  • Do you have proper indexes on the table? If yes, analyze all your indexes to identify non-performing indexes and missing indexes that could improve performance. Also check the fragmentation level of the indexes.
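    One way to do that fragmentation check is with the `sys.dm_db_index_physical_stats` DMV. A minimal sketch (the table name is illustrative; substitute your own):

    ```sql
    -- List all indexes on one table with their fragmentation level.
    -- 'LIMITED' scans only the b-tree levels above the leaf, which is
    -- cheap enough to run on a busy server.
    SELECT  i.name AS index_name,
            ps.index_type_desc,
            ps.avg_fragmentation_in_percent,
            ps.page_count
    FROM    sys.dm_db_index_physical_stats(
                DB_ID(), OBJECT_ID(N'dbo.Transactions'), NULL, NULL, 'LIMITED') ps
    JOIN    sys.indexes i
            ON  i.object_id = ps.object_id
            AND i.index_id  = ps.index_id
    ORDER BY ps.avg_fragmentation_in_percent DESC;
    ```

    A common rule of thumb is to REORGANIZE indexes above roughly 10% fragmentation and REBUILD above roughly 30%, but small indexes (a few hundred pages or less) rarely benefit either way.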

  • viiki.seth (11/27/2011)


    Try the NOLOCK query hint on your search query, so that no shared locks are issued against the table that would prohibit other transactions from modifying its data.

    Enabling dirty reads like that in a query can have some pretty serious ramifications including the duplication of rows in a "before" and "after" state during modifications. My recommendation would be to avoid the use of WITH(NOLOCK).

    Besides, WITH(NOLOCK) won't help the table being INSERTed into. It will only help on tables being read from. In fact, IIRC, MS is making a change (2012?) where the use of such table hints on tables being INSERTed to or UPDATEd will cause an error.
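    If the goal is simply readers that don't block writers, a commonly suggested alternative to NOLOCK (not mentioned above, so treat this as an editorial aside) is row-versioning-based read committed:

    ```sql
    -- Readers see the last committed version of each row instead of
    -- taking shared locks, so SELECTs stop blocking INSERT/UPDATE.
    -- Caveats: needs near-exclusive access to the database to switch on,
    -- and adds tempdb version-store overhead -- test before deploying.
    ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;
    ```

    Unlike NOLOCK, this gives consistent (not dirty) reads, at the cost of extra tempdb activity.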

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • I strongly vote against the NOLOCK hint!! It's not a "magic run-faster" option. For more details about the consequences, search this site or Google for "nolock hint dirty reads".

    My guess would be: due to an inefficient select statement, the table(s) is/are locked, causing the insert to wait for the select to finish. The insert itself is not slower; there's just more time to wait before the insert statement can start.

    I'd analyze the select statement and speed it up as much as possible.

    Maybe you're faced with a "catch-all query". If that's the case, Gail's blog might help you understand it and find a way to improve performance.

    If you need help, please post some more details following the advice given in the second link in my signature.



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

  • znkin (11/27/2011)


    I have a transactions table in my SQL Server 2008 database which stores data entered by users from my web application. I have also provided a page in the web application where users can search the data based on certain search parameters, and I have written a SELECT query to retrieve the data from this transactions table. The users use this search page very frequently, and I have observed that insertion into the transactions table is affected and becomes slow.

    Is there a way I can avoid the search on the transactions table, or is there any other feature available in SQL Server that will give me better performance?

    Thanks.

    This is a timeless and common problem.

    Searches on the table likely don't have much of anything to do with your INSERTion problem. More likely, it has to do with the indexes that have been built to support the searches. If the logical order of the INSERTs isn't the same as the logical order of the indexes, then you can get a large number of Page Splits on the Clustered Index and a large number of Extent Splits on Non-Clustered Indexes.

    Since you're using the same table for searches and a fair number of INSERTs, my recommendation would be to review the indexes on the table. The Clustered Index should have the same logical order as the INSERTs so that the new rows appear at the logical end of the Clustered Index which will virtually wipe out any Page Splits without having to decrease the FILLFACTOR on this very important index. Non-Clustered Indexes should be reviewed to make sure that they're actually being used (remove them if they're not being used) and that their logical order isn't too contrary to the order of INSERTion. Of course, any Non-Clustered Index that has very few unique values in the first column should generally be avoided. Usually, an alternate logical order for such indexes works just as well. And, of course, you need to work with the FILLFACTOR on such indexes to help avoid Extent Splits as well as having a decent Index Maintenance Plan.
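    The "make sure they're actually being used" part of that review can be sketched with the `sys.dm_db_index_usage_stats` DMV, which tracks reads and writes per index since the last service restart:

    ```sql
    -- Find indexes that are written far more often than they are read:
    -- prime candidates for removal on a heavily INSERTed table.
    SELECT  OBJECT_NAME(s.object_id) AS table_name,
            i.name                   AS index_name,
            s.user_seeks, s.user_scans, s.user_lookups,
            s.user_updates           -- maintenance cost of the index
    FROM    sys.dm_db_index_usage_stats s
    JOIN    sys.indexes i
            ON  i.object_id = s.object_id
            AND i.index_id  = s.index_id
    WHERE   s.database_id = DB_ID()
      AND   s.user_updates > (s.user_seeks + s.user_scans + s.user_lookups)
    ORDER BY s.user_updates DESC;
    ```

    Keep in mind the counters reset on restart, so check that the server has been up long enough to cover a representative workload before dropping anything.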

    You might also be able to either "Partition the Table" (Enterprise Edition) or use "Partitioned Views" (Standard Edition) to further alleviate the problem because, behind the scenes, the tables and indexes would be split horizontally.

    --Jeff Moden



  • It's a performance issue, and I am not sure why nobody has requested DDL and scripts with execution plans.

    @znkin: All the suggestions from SSC are good (but valid for some specific scenarios), and the guys shared their experience here. They may or may not apply to your case. Please post DDL and other prerequisites for further analysis.

  • My guess is that the search query looks something like this:

    SELECT ...
    FROM ...
    WHERE (@field1 = Field1 OR @field1 IS NULL)
      AND (@field2 = Field2 OR @field2 IS NULL)
    ...

    Do I win a prize? If so, read here: http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/

    If you have lots of those fields and/or many joins in the FROM clause, dynamic SQL can be a HUGE win here (like 4-5 orders of magnitude huge) - but be sure to guard against SQL injection.
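    A minimal sketch of that dynamic-SQL approach (table, column, and parameter names are illustrative): build only the predicates that were actually supplied, but keep the values parameterized through `sp_executesql` so plans are reused and injection is impossible:

    ```sql
    -- Sample inputs; in practice these are the search-page parameters.
    DECLARE @field1 int = 42, @field2 int = NULL;

    -- Start from an always-true WHERE clause and append only the
    -- predicates the user actually filled in.
    DECLARE @sql nvarchar(max) = N'SELECT * FROM dbo.Transactions WHERE 1 = 1';

    IF @field1 IS NOT NULL SET @sql += N' AND Field1 = @field1';
    IF @field2 IS NOT NULL SET @sql += N' AND Field2 = @field2';

    -- Values travel as parameters, never concatenated into the string,
    -- so each predicate combination gets its own reusable, safe plan.
    EXEC sys.sp_executesql
         @sql,
         N'@field1 int, @field2 int',   -- types are illustrative
         @field1 = @field1,
         @field2 = @field2;
    ```

    Each distinct combination of supplied parameters compiles to its own plan, which is exactly why this beats the `(@p = Col OR @p IS NULL)` catch-all pattern.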

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Thanks all for the replies...

    Apart from what everyone suggested, would it help if I created a view and transferred all the data into a separate table for the reports? (Right now too much activity happens on just one table.)

    Would that be a viable option?

    Thanks again

  • znkin (11/28/2011)


    Thanks all for the replies...

    Apart from what everyone suggested, would it help if I created a view and transferred all the data into a separate table for the reports? (Right now too much activity happens on just one table.)

    Would that be a viable option?

    Thanks again

    What about the SYNC / REFRESH of this table with the base tables? How soon would you like the base table data to appear in your reports?

  • znkin (11/28/2011)


    Thanks all for the replies...

    Apart from what everyone suggested, would it help if I created a view and transferred all the data into a separate table for the reports? (Right now too much activity happens on just one table.)

    Would that be a viable option?

    Thanks again

    There are lots of ways you could make a "reporting copy" of the data. That COULD allow for faster queries, and COULD cause less blocking (if that is happening). But it will NOT necessarily reduce the load on the server or make the reports run more quickly. It could in fact do just the opposite, because a complete copy of the data could blow other, more useful data out of RAM - which can KILL performance, especially if, like 99.994% of my clients, you have a poor I/O subsystem.

    I note that you have still not given us the table/index structures, query plan(s) nor even the report query code. Help us to help you. Or get a professional on board to do the work (and mentor you in how to do it while he/she is at it).

    Best,
    Kevin G. Boles

  • znkin (11/28/2011)


    Thanks all for the replies...

    Apart from what everyone suggested, would it help if I created a view and transferred all the data into a separate table for the reports? (Right now too much activity happens on just one table.)

    Would that be a viable option?

    Thanks again

    A view is just another query against the table, so the only thing that may change is that search performance may go down. A lot of well-meaning folks don't understand that, because of the "all in one" nature of views, you may be missing some serious performance gains in the searches that a "Divide'n'Conquer" stored procedure could provide instead.

    Besides, as I stated before... I don't believe the search code has anything to do with your insert problem.
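    A minimal sketch of what such a "Divide'n'Conquer" procedure might look like (procedure and parameter names are hypothetical, not from this thread): the outer procedure only routes, and each sub-procedure holds one simple, well-indexed query:

    ```sql
    -- Router: picks a dedicated sub-procedure per search pattern, so
    -- each pattern gets its own small query and its own cached plan.
    CREATE PROCEDURE dbo.SearchTransactions
        @Number    varchar(100) = NULL,
        @ValueDate datetime     = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        IF @Number IS NOT NULL
            EXEC dbo.SearchTransactions_ByNumber @Number;       -- hypothetical
        ELSE IF @ValueDate IS NOT NULL
            EXEC dbo.SearchTransactions_ByValueDate @ValueDate; -- hypothetical
        -- ...one branch per supported search pattern
    END;
    ```

    The payoff is that no single plan has to serve every parameter combination, which is the root problem with catch-all views and queries.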

    --Jeff Moden



  • znkin (11/28/2011)


    Thanks all for the replies...

    Apart from what everyone suggested, would it help if I created a view and transferred all the data into a separate table for the reports? (Right now too much activity happens on just one table.)

    Would that be a viable option?

    Thanks again

    The performance depends on several factors, and this can't be solved until you provide useful information about the problem. It may be caused by a poor I/O subsystem, or you may be missing required indexes, or anything else. Would you please provide the table structures (row counts as well) and the execution plan of the query?

    By the way, what type of waits do you see in sys.dm_exec_requests when performance is slow?
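    For reference, a minimal sketch of that check (run it while the slowness is happening):

    ```sql
    -- Show what each active request is currently waiting on, plus who,
    -- if anyone, is blocking it. blocking_session_id = 0 means no blocker.
    SELECT  r.session_id,
            r.status,
            r.command,
            r.wait_type,
            r.wait_time,            -- milliseconds in the current wait
            r.blocking_session_id
    FROM    sys.dm_exec_requests r
    WHERE   r.session_id > 50       -- skip most system sessions
    ORDER BY r.wait_time DESC;
    ```

    Lock waits (LCK_M_*) during the slow periods would support the blocking theory; PAGEIOLATCH_* waits would point at the I/O subsystem instead.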


    Sujeet Singh

  • Thank you everyone for your suggestions.

    Below is the DDL of the table that I insert into and search. This table currently has 510,278 rows (about half a million), and records are inserted daily.

    Actually my issue here is that, apart from the insert, I have now provided a search page where users can search on the API_TM_Valuedate and API_TM_Number columns (both have been indexed), and during peak hours, when insert and search are heavily used, we sometimes see slowness in the search output.

    Also, I plan to develop more features where users can download data from this table via a web page (again a SELECT query based on date and some other column, depending on the requirement). You will notice there is a 'Master' keyword in the table name because in the future I could have another table with a primary/foreign key relationship to the current table on the API_TM_TrackId column, so the search query would then need an inner join, etc.

    So, keeping all this in mind, my broader question was whether my database design and queries take the correct approach, because everything (SELECT and INSERT) hits just this one big table very, very regularly.

    CREATE TABLE [API_Track_Master](
        [API_TM_TrackId] [bigint] IDENTITY(1,1) NOT NULL,
        [API_TM_Number] [varchar](100) NOT NULL,
        [API_TM_UpperId] [bigint] NULL,
        [API_TM_Tracker] [int] NOT NULL,
        [API_TM_TrackerType] [int] NOT NULL,
        [API_TM_TrackBy] [varchar](50) NOT NULL,
        [API_TM_Date] [datetime] NOT NULL,
        [API_TM_Status] [char](2) NOT NULL,
        [API_TM_Valuedate] [datetime] NULL,
        [API_TM_CategoryId] [char](10) NULL,
        [API_TM_FileName] [nvarchar](500) NULL,
        CONSTRAINT [PK_Trackid] PRIMARY KEY CLUSTERED ([API_TM_TrackId] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    --------------

    CREATE NONCLUSTERED INDEX [IX_API_Track_Master] ON [API_Track_Master] ([API_TM_Number] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    -----------------------------------

    CREATE NONCLUSTERED INDEX [IX_API_Track_Master_1] ON [API_Track_Master] ([API_TM_Valuedate] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    ----------------

    ALTER TABLE [API_Track_Master] ADD CONSTRAINT [PK_API_Track_Master1] PRIMARY KEY CLUSTERED ([API_TM_TrackId] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    ----------------
