Suggestions on making it more efficient and faster

• And again -- and for the last time, so as not to upset everyone -- you likely need to re-cluster the Quality table to get even more performance gains, likely across most of your processing, not just this one query. To be sure, I would need to review index usage and missing-index stats, but you could still do it just as a test of this query and see what gains you get. It sounds like they would be quite substantial.

SQL DBA, SQL Server MVP (07, 08, 09)
    Prosecutor James Blackburn, in closing argument in the Fatal Vision murders trial: "If in the future, you should cry a tear, cry one for them [the murder victims]. If in the future, you should say a prayer, say one for them. And if in the future, you should light a candle, light one for them."

  • Not sure what you mean by " re-cluster the Quality table".

    Thanks.

  • Bruin wrote:

    Not sure what you mean by " re-cluster the Quality table".

    Thanks.

I think Scott means to drop the clustered primary key on table Quality (CONSTRAINT [PK_Quality] PRIMARY KEY CLUSTERED ([Quality_ID] ASC)) and recreate it as a non-clustered primary key, then create a new clustered index:

    CREATE CLUSTERED INDEX IX_Quality_1 ON Quality(Quality_Container_ID, Quality_Test_ID, Quality_Date)
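A sketch of the full sequence, assuming the constraint and column names shown above (verify them against your actual schema before running anything):

```sql
-- Drop the existing clustered PK (any FKs referencing it must be dropped or disabled first)
ALTER TABLE dbo.Quality DROP CONSTRAINT PK_Quality;

-- Recreate the PK as nonclustered so Quality_ID stays unique
ALTER TABLE dbo.Quality
    ADD CONSTRAINT PK_Quality PRIMARY KEY NONCLUSTERED (Quality_ID);

-- Build the new clustered index on the query's search columns
CREATE CLUSTERED INDEX IX_Quality_1
    ON dbo.Quality (Quality_Container_ID, Quality_Test_ID, Quality_Date);
```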
• Any suggestions for dropping and re-creating on that large a table?


Not sure how much of a time window I have to do the drop/create... The box it's on has 40 GB of memory and 6 logical processors, but the feeds to the data are 24/7. The SQL release is 2012 Standard.

    Thanks.

  • Bruin wrote:

    any suggestions for dropping and re-creating on that large of a Table?

Not sure how much of a time window I have to do the drop/create... The box it's on has 40 GB of memory and 6 logical processors, but the feeds to the data are 24/7. The SQL release is 2012 Standard.

    Thanks.

    How many rows does the table have?

    You will also have to disable all the referencing FKs while you drop and recreate the primary key.

    You should test this in a different environment. Do you have a system test area?
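For the FK step, the usual disable/re-enable pattern looks like the following. The constraint name here is hypothetical; list the real ones from sys.foreign_keys:

```sql
-- Disable the referencing FK (hypothetical name) before dropping the PK
ALTER TABLE dbo.Quality_Attribute NOCHECK CONSTRAINT FK_Quality_Attribute_Quality;

-- ... drop and recreate the primary key here ...

-- Re-enable and re-validate; WITH CHECK makes the FK trusted again
ALTER TABLE dbo.Quality_Attribute WITH CHECK CHECK CONSTRAINT FK_Quality_Attribute_Quality;
```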

  • Yes, I have a Test env, and the Quality table has 100+ million rows.

  • Don't drop the existing table.

    Create a new, test (dummy) Quality table UNIQUEly CLUSTERED on ( Quality_Date, Quality_ID ).

    CREATE TABLE dbo.Quality_Test ( ... )

    CREATE UNIQUE CLUSTERED INDEX QUALITY_Test__CL ON dbo.Quality_Test ( Quality_Date, Quality_ID )

    Create the clus index BEFORE loading the table.  Don't create other indexes prior to load.

    Load the _Test table in batches, in current clus key order, from the main table.  Make the first batch larger than the rest, and be sure to specify WITH (TABLOCK) on the _Test table while loading.  You can specify WITH (NOLOCK) on the original dbo.Quality table.
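A minimal sketch of that batched load, assuming Quality_ID is the current clustered key (an ascending identity) and using a hypothetical batch size; the column lists are placeholders for the table's actual columns:

```sql
DECLARE @LastID int = 0;
DECLARE @BatchSize int = 5000000;  -- hypothetical; make the first batch larger than the rest

WHILE 1 = 1
BEGIN
    -- TABLOCK on the target enables minimal logging; NOLOCK avoids blocking the 24/7 feeds
    INSERT INTO dbo.Quality_Test WITH (TABLOCK) ( Quality_ID, Quality_Date /* , other columns */ )
    SELECT TOP (@BatchSize) Quality_ID, Quality_Date /* , other columns */
    FROM dbo.Quality WITH (NOLOCK)
    WHERE Quality_ID > @LastID
    ORDER BY Quality_ID;  -- current clustered key order

    IF @@ROWCOUNT = 0 BREAK;

    SELECT @LastID = MAX(Quality_ID) FROM dbo.Quality_Test;
END;
```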

    Once it's fully loaded, you can create the other indexes if you'd like.  Don't create any FKs, you don't need that hassle right now.

Finally, test selected slow queries by changing table "dbo.Quality" in those queries to "dbo.Quality_Test".

Yeah, that takes some disk space, but it is otherwise not disruptive to your current environment.


  • Have you tested the query with the proposed changes?  The suggestions on the changes to the EXISTS should help quite a bit - and you really need to know what that performance is so you can compare the performance after modifying the clustered index.

    The second query you showed can be changed to use NOT EXISTS also - basically the same structure as the first query.

Select @Alarm_Count = count(*)
    From dbo.Quality As q
    Where q.Quality_Container_ID = @Container_ID
      And q.Quality_Test_ID = @Current_Test_ID
      And q.Quality_Date >= dateadd(DAY, -2, @Current_DateTime)
      And q.Quality_Date <= @Current_DateTime
      And q.Result <> N'PASS' -- Probably not needed, unless you have alarm types of PASS%
      And q.Result Like @Alarm_Type
      And Not Exists (Select *
                      From dbo.Quality_Attribute As qa
                      Where qa.Quality_ID = q.Quality_ID
                        And qa.Attribute = 'Comment'
                        And qa.Attribute_Value Like 'In%Status%');

    In addition - make sure you define the PK on the Quality_Test table using a unique constraint (non-clustered) on the Quality_ID column.

    Another way to possibly improve performance would be to create a temp table for the Quality_Attribute check.

Drop Table If Exists #quality_attribute;

    Select qa.Quality_ID
    Into #quality_attribute
    From dbo.Quality_Attribute As qa
    Where qa.Attribute = 'Comment'
      And qa.Attribute_Value Like 'In%Status%';

    Then use that in the NOT EXISTS:

       AND NOT EXISTS (Select * From #quality_attribute qa Where qa.Quality_ID = q.Quality_ID)

And if these queries are in the same batch, using a temp table should be more efficient than reading from the attribute table for each query.
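Put together, the count query using the temp table might look like this (same predicates as above; the temp table is assumed to have been populated first):

```sql
Select @Alarm_Count = count(*)
From dbo.Quality As q
Where q.Quality_Container_ID = @Container_ID
  And q.Quality_Test_ID = @Current_Test_ID
  And q.Quality_Date >= dateadd(DAY, -2, @Current_DateTime)
  And q.Quality_Date <= @Current_DateTime
  And q.Result Like @Alarm_Type
  -- the temp table replaces the correlated read of dbo.Quality_Attribute
  And Not Exists (Select * From #quality_attribute qa
                  Where qa.Quality_ID = q.Quality_ID);
```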

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

• There are multiple threads using this process; how would the temp table work in that situation?


    Thanks.

  • Jeffrey Williams wrote:

    In addition - make sure you define the PK on the Quality_Test table using a unique constraint (non-clustered) on the Quality_ID column.

    That's not technically required on the _Test version of the table.


  • Bruin wrote:

    There are multiple threads hitting using this process, how would the temp table work in that situation?

    Temp tables are local to the connection/process that creates them - multiple 'threads' would each have their own version of the temp table.  Based on the code you have provided it appears that both of these totals are returned from the same procedure/script - if so, then building a temp table may help, but the only way to know for sure is to test.


• I changed both queries to use NOT EXISTS, and so far it looks promising in the performance category. I have an archive process set up to keep the Quality, Detail, and Attribute tables to only two weeks of current data. Since the tables are so large and no one ever ran the process, do you have any suggestions on how to get this data archived without taking weeks to accomplish? Each of the tables has an equivalent _archive table to hold the data. The archive works from Quality_Date as the driver and purges/archives using a date parameter set for the number of weeks to look back.

    Thanks

  • Bruin wrote:

I changed both queries to use NOT EXISTS, and so far it looks promising in the performance category. I have an archive process set up to keep the Quality, Detail, and Attribute tables to only two weeks of current data. Since the tables are so large and no one ever ran the process, do you have any suggestions on how to get this data archived without taking weeks to accomplish? Each of the tables has an equivalent _archive table to hold the data. The archive works from Quality_Date as the driver and purges/archives using a date parameter set for the number of weeks to look back.

    Thanks

This should be a new question - although the purge requirement should have been mentioned as part of this one, because the fact that the table can and should have been purged might change how you define the indexes.

As for running the purge process - you can batch the deletes. How that is set up depends on the clustered index, though: if your clustered index is on Quality_ID, you would batch based on that value (assuming it is an identity). If clustered on the date/id as recommended, you would batch on the date.
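A sketch of a batched purge loop, assuming the recommended (Quality_Date, Quality_ID) clustered index, a hypothetical archive table name, and an archive table with the same columns as the source (required for OUTPUT deleted.*):

```sql
DECLARE @Cutoff datetime = DATEADD(WEEK, -2, GETDATE());  -- hypothetical look-back

WHILE 1 = 1
BEGIN
    -- Archive and delete one batch in a single statement, keyed on the clustered leading column
    DELETE TOP (50000) q
    OUTPUT deleted.* INTO dbo.Quality_Archive
    FROM dbo.Quality AS q
    WHERE q.Quality_Date < @Cutoff;

    IF @@ROWCOUNT = 0 BREAK;
END;
```

Keeping each batch small bounds transaction-log growth and lock duration, which matters with 24/7 feeds.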


  • If you cluster the table properly -- Quality ( Quality_Date, Quality_ID ) -- you likely won't have to purge them now.  That is one of the benefits of properly clustering the table.  You can access only the current data without having to purge existing data.  It may still make sense to purge the data, but it wouldn't have to be a rushed thing just to get decent performance.


• To process this data I set up SQL Agent jobs, basically one for each oven\wireline. It's processing the queue of records fairly fast, but I do see a wait condition of "SOS_SCHEDULER_YIELD"... I have a box with 6 logical processors and 40 GB of memory; is there anything performance-wise I could tweak at the engine level?
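To see whether SOS_SCHEDULER_YIELD (CPU pressure) is actually the dominant wait rather than just a frequent one, a standard DMV check is (counters are cumulative since the last restart):

```sql
SELECT TOP (10)
    wait_type,
    wait_time_ms,
    signal_wait_time_ms,  -- time spent waiting for a CPU after the resource was ready
    waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```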

Thanks for the replies; the suggestions are much appreciated.

