Throttle things so that use of certain linked servers gets fewer resources

  • Hi, we use SQL Server 2019 Standard both for our SSRS traffic and as a major homegrown DW/tabular cube source on the same server.

    We want to add a linked server to Oracle on that server, but would prefer that, resource-wise, it get less horsepower/priority than everything else running on that server.

    If I recall, a discussion took place at https://www.sqlservercentral.com/forums/topic/home-grown-resource-governor in which normal query priorities could be throttled by "groups" in SQL, and those groups were made up of logins, accounts, etc. We would use a specific credential on this linked server. As I recall, using the resource governor feature is "klunky" in that once you start using it, you have to have groups for everybody regardless of their priority. (A rough sketch of that group/classifier setup is included at the end of this post.)

    Can somebody offer some advice/clarity?

    Had we been able to figure out how to go through data sources instead of OPENQUERY/linked servers for reports, would we have been unable to achieve this resource priority scheme? We got errors when trying to go through Oracle data sources for reports and haven't had the time yet to uninstall and reinstall Oracle to get that option working. But maybe that was a good thing?
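
    For reference, a minimal sketch of how the built-in Resource Governor handles that kind of classification, with placeholder names (OraclePool, OracleReports, and ora_reports_login are made up, not anything from this thread): sessions are routed to a workload group by a classifier function keyed on the SQL Server login, and anything the classifier doesn't match falls through to the default group, so you don't literally need a group for everybody. Two caveats: Resource Governor is an Enterprise-edition feature (hence "home grown" approaches on Standard), and it classifies by the SQL Server login of the session issuing the query, not by the Oracle credential mapped on the linked server.

    -- server-level objects; the classifier function must live in master
    USE master;
    GO
    CREATE RESOURCE POOL OraclePool
        WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
    GO
    CREATE WORKLOAD GROUP OracleReports
        WITH (IMPORTANCE = LOW)
        USING OraclePool;
    GO
    -- classifier: the placeholder login gets the throttled group,
    -- every other session falls through to the default group
    CREATE FUNCTION dbo.fnRGClassifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        IF SUSER_SNAME() = N'ora_reports_login'
            RETURN N'OracleReports';
        RETURN N'default';
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnRGClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;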

  • IMHO (and a tiny bit of experience more than a decade ago), it would be more effective to have the Oracle side do the necessary "cube prep" and export the results to a TSV file and then do a BULK INSERT of that file on the SQL Server side.

    If you use the "Swap'n'Drop" method of repointing synonyms from the currently active table to the newly loaded table, there will be virtually no "down time" on the SQL Server side. (A sketch of that pattern appears after this post.)

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
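
    A minimal sketch of that load-and-swap pattern; the table, column, and file names are placeholders (not Jeff's actual code), and the extract is assumed to be a tab-separated file produced on the Oracle side. On the next load, the roles of the _A and _B copies simply flip.

    -- placeholder names and path: load the new extract into the "B" copy
    CREATE TABLE dbo.SalesDaily_B
    (
        SaleDate  date          NOT NULL,
        ProductID int           NOT NULL,
        SalesAmt  decimal(19,4) NOT NULL
    );

    BULK INSERT dbo.SalesDaily_B
    FROM 'C:\Extracts\sales_daily.tsv'
    WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);

    -- "Swap'n'Drop": repoint the synonym the reports read, then drop the old copy
    BEGIN TRANSACTION;
        DROP SYNONYM IF EXISTS dbo.SalesDaily;
        CREATE SYNONYM dbo.SalesDaily FOR dbo.SalesDaily_B;
    COMMIT;

    DROP TABLE IF EXISTS dbo.SalesDaily_A;  -- the previously active copy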

  • Thx Jeff, you are basically suggesting ETL: loading data incrementally after reshaping into a star. We actually talked about that but aren't ready. We do it in other business areas but aren't going to do it here quite yet.

  • What I'm actually suggesting is to avoid all sorts of problems.  You have a process that you want to "throttle" because it uses a lot of resources.  Consider what that does... it takes resource usage that probably isn't necessary and stretches it out over a longer time, which means that other stuff is also going to continue to be slower for a longer time.

    And, no... I'm not suggesting "reshaping into a star".  I'm suggesting a technique known as "pre-aggregation" that can seriously reduce the volume of data that needs to be copied or transferred or...

    For example, I have a demo where a 100 MILLION row sales table is reduced down to only 29,220 rows because the lowest resolution required is a whole day and there are only 10 years of days for 8 products.  Cross-tabular cube analysis by year, quarter, month, and product takes about 250 MILLIseconds, and that includes the time it takes to display it in SSMS. (A sketch of that kind of pre-aggregation follows below.)


    --Jeff Moden
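
    A minimal sketch of that kind of pre-aggregation, with made-up table and column names (the 100-million-row demo itself isn't posted here): collapse the detail to one row per day per product up front, then run the year/quarter/month/product cross-tab work against the small summary.

    -- placeholder names: dbo.SalesDetail is the wide detail table,
    -- dbo.SalesDaily ends up with ~29,220 rows (10 years of days x 8 products)
    SELECT  CONVERT(date, sd.SaleDateTime) AS SaleDate,
            sd.ProductID,
            SUM(sd.SaleAmount)             AS SalesAmt,
            COUNT(*)                       AS SaleCount
    INTO    dbo.SalesDaily
    FROM    dbo.SalesDetail sd
    GROUP BY CONVERT(date, sd.SaleDateTime), sd.ProductID;

    -- the cross-tab style analysis then only ever touches the summary
    SELECT  YEAR(SaleDate)              AS SaleYear,
            DATEPART(quarter, SaleDate) AS SaleQuarter,
            MONTH(SaleDate)             AS SaleMonth,
            ProductID,
            SUM(SalesAmt)               AS SalesAmt
    FROM    dbo.SalesDaily
    GROUP BY YEAR(SaleDate), DATEPART(quarter, SaleDate), MONTH(SaleDate), ProductID;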



  • Thx Jeff for your time. Unfortunately the reports needing the data are operational, so the data they need is pretty detailed, and the volume is going to be a factor anyway.

    Your opinion is important to me, so I'd like to ask: other than using linked servers and the source being Oracle, this isn't much different from our more common reports that read detailed data off a SQL Server. And yet, we still feel like this is going to drag down performance, both for the server's other duties and in the eyes of the user, more noticeably than similar SQL-based reports. Do you agree? And if so, do you remember from that prior experience why, if we take pre-aggregation out of the conversation?

    We know one thing for sure. Any kind of tuning on the Oracle side, where things generally turn into materialized views, is in the hands of one or two people who aren't SQL folks. And I know from just a tiny bit of experience that uploading a large number of records from Oracle through a linked server takes a lot of time. But I don't have any background explaining why that might be. So it's tough to make arguments against those objects without a leg to stand on.

  • Piling on Jeff's solid advice, the term "throttle" can have different meanings! My perception is that the requirements are not clear enough!

    😎

    Far too often, throttling is thought of as a pipeline constraint; one should rather think of the problem as a Cost Of Query Optimisation problem. Do the math! A low-resource system might cost x per time unit and the query will take y time units, for a total of x*y. A higher-resourced system might cost xx per time unit but run the query in y/100 time units, for a total of xx*y/100; unless the bigger system's rate is more than 100 times x, it comes out cheaper overall, and the query finishes far sooner either way.

    Throttling is not the same as choking!

  • stan wrote:

      And I know from just a tiny bit of experience that uploading a large number of records from Oracle through a linked server takes a lot of time. But I don't have any background explaining why that might be. So it's tough to make arguments against those objects without a leg to stand on.

    This is a common issue when transferring data from Oracle to SQL Server.

    Most will create a linked server, then do a straight "insert into sql_table select * from oracle_table", with or without filtering and with or without selecting just the required columns.

    The above is the very worst way of doing it, due to data conversion (the same issue arises if doing the transfer through SSIS).

    The best way is to use OPENQUERY (use dynamic SQL if required to pass filtering parameters) and convert all numbers and dates to strings at the source. If required, use a staging table on the SQL Server side to hold the data before converting to the final, correct datatypes (not always required when converting from string to other datatypes).

    Example:

    declare @sql varchar(max)
    declare @parmdate date = '20240101'  -- example value; pass in the real filter date

    -- every quote is doubled once for the T-SQL literal and once more for the
    -- string handed to OPENQUERY, so the Oracle format masks carry four of them;
    -- size the FM mask to the largest number expected
    set @sql = 'select *
    from OPENQUERY (oraclelinkedserver
    , ''select t1.f1
    , t1.f2
    , TO_CHAR(t1.f3, ''''YYYY-MM-DD hh24:mi:ss'''')
    , TO_CHAR(t1.f4, ''''FM99999999990.00'''')
    from oracletable t1
    where t1.date_column >= TO_DATE(''''' + convert(varchar(10), @parmdate, 105) + ''''', ''''DD-MM-YYYY'''')'')'

    insert into sqltable
    exec (@sql)




    Another main aspect of it: NEVER EVER join local SQL Server tables to remote Oracle tables (or any linked-server tables, for that matter); the staging alternative is sketched below.
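
    A minimal sketch of that alternative, with placeholder names (oraclelinkedserver, SCHEMA1.ORDER_STATUS, and dbo.SqlOrders are made up): pull only the filtered remote rows into a local temp table via OPENQUERY, then do the join entirely on the SQL Server side.

    -- avoid: a local-to-remote join can force row-by-row fetches from Oracle
    -- select s.OrderID, o.STATUS_DESC
    -- from dbo.SqlOrders s
    -- join oraclelinkedserver..SCHEMA1.ORDER_STATUS o on o.ORDER_ID = s.OrderID;

    -- instead: stage the already-filtered remote rows, then join locally
    select *
    into   #order_status
    from   OPENQUERY (oraclelinkedserver
           , 'select o.ORDER_ID, o.STATUS_DESC
              from SCHEMA1.ORDER_STATUS o
              where o.LAST_UPDATED >= TRUNC(SYSDATE) - 7');

    select s.OrderID, os.STATUS_DESC
    from   dbo.SqlOrders s
    join   #order_status os
      on   os.ORDER_ID = s.OrderID;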


  • frederico_fonseca wrote:

    The best way is to use OPENQUERY (use dynamic SQL if required to pass filtering parameters) and convert all numbers and dates to strings at the source. If required, use a staging table on the SQL Server side to hold the data before converting to the final, correct datatypes (not always required when converting from string to other datatypes).

    Preach it!

    --Jeff Moden



  • thx Eirikur,

    Throttling in this context means it (the linked-server work) gets some relatively lower priority if we go with a "live experience". That (live experience) should have been clear from the original post. I'm pretty good at math but am having difficulty following yours. I think what you want to say is that getting today's additional (incremental) data and inserting it in SQL alongside what we already have is economically wiser than going back to the well (which is slow) over and over. And as for unclear requirements, which are probably constrained by SQL's abilities anyway, I had to start somewhere.

  • As far as I know, you can't really throttle linked servers.
