Any way to assign a lower priority to a query / view?

  • Hi all,

    I'd like to create a set of views which users will connect to through Excel (using the From SQL Server Data Source option on the External Data tab), and which will allow them to view the data in real time, in a read-only fashion.

    As far as I'm aware, this functionality within Excel performs a one-time import of the data, and can be refreshed through the "Refresh Data" function within the Table Tools External Table Data tab.

    What I'd like to know is: is there any way I can tell SQL Server, either through Excel or through the view definition (or through any other technique!), to run this query at a low priority? What I want to avoid is users of these views tying up system resources that would otherwise be used for processing updates or queries against the underlying tables originating from the website.

  • The short answer is: No, you can't assign execution priority to a SQL Server request the same way you can a Windows process. Also, the issue isn't just CPU and I/O utilization but also the query's reads (shared locks) blocking updates (exclusive locks). 

    I don't know what resources are at your disposal, but the typical way that corporate database applications approach the issue of ad-hoc query users is to implement scale-out nodes using replication or Always On Availability Groups.

    I once supported a call center application with several hundred users that was very query intensive, in addition to being very update intensive. For some of the most commonly accessed dashboard and reporting views, I wrote a stored procedure that would run the query and then persist the resultset into a flat summary table, but only if the previously persisted resultset was older than something like 5 minutes. If the most recently persisted resultset was still "fresh", meaning it had been queried less than 5 minutes ago, the procedure would instead read the resultset back from the summary table, apply filtering based on user-specific input parameters, and return it to the user. This technique has to be applied judiciously, and it obviously isn't practical if the end-user requirement is that the data be as fresh as possible.
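
    A minimal sketch of that caching pattern follows; the names (dbo.DashboardSummary, dbo.Orders, @Region) are hypothetical stand-ins for your own tables and filter parameters:

    CREATE PROCEDURE dbo.GetDashboardSummary
        @Region varchar(50)  -- hypothetical user-specific filter parameter
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Rebuild the persisted resultset only if the cached copy is
        -- older than 5 minutes
        IF NOT EXISTS (SELECT 1 FROM dbo.DashboardSummary
                       WHERE RefreshedAt > DATEADD(MINUTE, -5, GETDATE()))
        BEGIN
            TRUNCATE TABLE dbo.DashboardSummary;

            INSERT INTO dbo.DashboardSummary (Region, OrderCount, RefreshedAt)
            SELECT Region, COUNT(*), GETDATE()
            FROM dbo.Orders        -- the expensive query against live tables
            GROUP BY Region;
        END;

        -- Serve the (possibly cached) resultset, filtered for the caller
        SELECT Region, OrderCount, RefreshedAt
        FROM dbo.DashboardSummary
        WHERE Region = @Region;
    END;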

    However, there are a handful of simple (but not necessarily ideal) measures you can take to mitigate the performance impact of ad-hoc, potentially resource-intensive queries in a high-volume online transactional database.

    For one thing, you can use the session setting "SET DEADLOCK_PRIORITY LOW". This specifies that the current session will be chosen as the deadlock victim if it is involved in a deadlock and the other sessions in the deadlock chain have their deadlock priority set to NORMAL or HIGH.

    You can also "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" to disable the query's use of shared locks, but this can affect the accuracy of the resultset. For example, you may end up missing or double-counting a handful of rows that are currently modified and uncommitted. Read up on "sql server nolock dirty reads" for more detail and decide whether this is an acceptable trade-off.

    Finally, you can "SET TRANSACTION ISOLATION LEVEL SNAPSHOT" to prevent the query from holding read locks and blocking updates, without the side effects of READ UNCOMMITTED. However, this can impact performance in a database with a high volume of updates, because row versions of modified data are maintained in tempdb.
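
    Note that SNAPSHOT isolation must first be enabled at the database level before any session can request it. A minimal sketch, with MyDatabase and MyView as placeholder names:

    -- One-time setup, at the database level
    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Then each reporting session opts in before querying
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    SELECT a, b, c FROM MyView WHERE z = 123;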


    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Hey Eric,

    Thanks for the info!

    The idea of a snapshot table, or possibly even just a snapshot database, refreshed on a periodic basis, is something I considered. If the volume of requests against these views became high enough, I might explore that possibility, but for now I'm not expecting much traffic, so I don't think I need to worry about it. To apply the settings you mentioned, do I simply alter my view definition as follows?

    SET ANSI_NULLS ON
    GO

    SET QUOTED_IDENTIFIER ON
    GO
    SET DEADLOCK_PRIORITY LOW
    GO
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    GO
    ALTER VIEW [dbo].[____]
    AS 
    ....
  • I'm with Eric: there's no way to do what you're asking.

    However, you can look to the Resource Governor as a mechanism to limit the resources used. It depends on the logins used, though, and it will limit more than just a single view or procedure. Read up on it in the documentation.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • kramaswamy - Thursday, January 26, 2017 1:23 PM


    No, deadlock priority and isolation level are not defined at the DDL level; they are set at the session level when the T-SQL command is executed. The SET statements in your example, the way you're using them, would not affect the behavior of the view you're creating. If users are selecting from a view, they will need to issue the SET statements prior to the SELECT statement, like so:
    SET DEADLOCK_PRIORITY LOW;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT a, b, c FROM MyView WHERE z = 123;

    It is also possible to enable READ UNCOMMITTED using the NOLOCK query hint like so:
    SELECT a, b, c FROM MyTable WITH (NOLOCK);

    Alternatively, you can contain the SET and SELECT statements within a stored procedure, and have the users simply execute the stored procedure.
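
    For example, here's a minimal sketch of such a wrapper procedure (dbo.GetMyViewData and @z are hypothetical names); the SET statements only remain in effect for the duration of the procedure:

    CREATE PROCEDURE dbo.GetMyViewData
        @z int
    AS
    BEGIN
        SET NOCOUNT ON;
        SET DEADLOCK_PRIORITY LOW;
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        SELECT a, b, c FROM MyView WHERE z = @z;
    END;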

    I'm not that familiar with SQL Server's Resource Governor, but my understanding is that it involves pre-allocating resource pools, essentially reserving CPU and memory capacity, for use by these specific ad-hoc workloads. If you're trying to manage a high-volume transactional server with limited resources to work with, that might not be practical.

    For your needs in this scenario, you might want to consider something like summary tables refreshed by a scheduled job, which minimizes the frequency of the querying while still keeping the data fresh enough for the users' needs. For example, I've retrofitted daily reports to feed from summary tables populated on a nightly schedule. Once that's done, the users can fire off a report 1,000 times with minimal impact, and the data is always consistent because it's static. As an added benefit, if you key your data using something like a ReportDate column and retain the summary data from one day to the next, you get historical reporting when needed. This could be the first baby step in a longer-range plan for implementing something like a reporting data mart.
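
    As a sketch of that ReportDate idea (all object names hypothetical), the nightly job would append a dated snapshot while retaining prior days:

    -- Summary table keyed by the date of the snapshot
    CREATE TABLE dbo.DailySalesSummary (
        ReportDate  date        NOT NULL,
        Region      varchar(50) NOT NULL,
        TotalSales  money       NOT NULL,
        CONSTRAINT PK_DailySalesSummary PRIMARY KEY (ReportDate, Region)
    );

    -- Run nightly by a SQL Agent job; prior days are retained for
    -- historical reporting
    INSERT INTO dbo.DailySalesSummary (ReportDate, Region, TotalSales)
    SELECT CAST(GETDATE() AS date), Region, SUM(SaleAmount)
    FROM dbo.Sales
    GROUP BY Region;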

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell - Friday, January 27, 2017 8:59 AM


    It's not so much reserving CPU, etc., although you can do that, as limiting the amount of those resources available. You don't have to set up workload groups for all accounts. You can selectively assign certain logins to a workload group and then limit what that group's resource pool has access to, while leaving all other logins alone. It sounds like an imperfect solution here, but it may be better than nothing at all.
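
    For illustration, here's a minimal sketch of that selective setup, assuming a hypothetical reporting login named excel_reporting. The classifier function must be created in master, and keep in mind that Resource Governor is an Enterprise Edition feature:

    -- Cap the resources available to the reporting workload
    CREATE RESOURCE POOL ReportingPool
        WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);

    CREATE WORKLOAD GROUP ReportingGroup
        USING ReportingPool;
    GO

    -- Route only the reporting login into that group; all other logins
    -- fall through to the default group untouched
    CREATE FUNCTION dbo.rg_classifier()
    RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @grp sysname = N'default';
        IF SUSER_SNAME() = N'excel_reporting'  -- hypothetical login
            SET @grp = N'ReportingGroup';
        RETURN @grp;
    END;
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;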

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
