SQLServerCentral is supported by Red Gate Software Ltd.
How SQL Server Chooses the Type of Join
Posted Wednesday, May 23, 2007 2:13 PM
Grasshopper

Does anyone know how to solve this problem with a function? It appears that using hints is the only option, since the WITH RECOMPILE option is not supported for user-defined functions.
Post #368416
Posted Tuesday, February 12, 2008 9:57 PM
SSC Rookie

Wouldn't it be more effective to either call two different stored procedures (one for wide date ranges, one for narrow) or use the HINT option to force the JOIN type? That way you wouldn't incur the overhead of a RECOMPILE with every call of the stored procedure.

There's a good writeup on HINTs here: http://www.microsoft.com/technet/prodtechnol/sql/2005/frcqupln.mspx
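For example (table and column names here are made up, just to show the shape), a join hint forces the physical join type regardless of the estimated row counts:

```sql
-- Hypothetical tables; INNER HASH JOIN forces a hash join for this join
SELECT o.OrderID, d.ProductID
FROM dbo.Orders AS o
INNER HASH JOIN dbo.OrderDetails AS d
    ON d.OrderID = o.OrderID
WHERE o.OrderDate BETWEEN @StartDate AND @EndDate;

-- Or, at the query level, force hash joins for every join in the query:
-- ... OPTION (HASH JOIN);
```

Keep in mind a hint removes the optimizer's freedom to pick a better join when the data changes, so test both date-range shapes before committing to it.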


Andy Hilliard
Owl Creek Consulting


Post #454855
Posted Wednesday, February 13, 2008 5:31 AM
SSCrazy

Just another option...

Rather than using RECOMPILE in your procedure, using OPTIMIZE FOR and specifying date parameters that give you your hash joins may give you better performance overall and avoid recompiling the procedure each time it runs.

It would force the optimizer to avoid the loop joins all of the time, but if the performance of small date ranges is acceptable this way, your overall time to execute the entire set of batches may go down.
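As a rough sketch (procedure, table, and date values are hypothetical), the hint goes on the query inside the procedure body:

```sql
CREATE PROCEDURE dbo.GetOrders
    @StartDate datetime,
    @EndDate   datetime
AS
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.OrderDate BETWEEN @StartDate AND @EndDate
-- Compile the plan as if a wide date range (which favors hash joins) were
-- passed, regardless of the actual values supplied on first execution
OPTION (OPTIMIZE FOR (@StartDate = '2000-01-01', @EndDate = '2007-12-31'));
```

The plan is then compiled once for the "wide range" case and reused for every call, narrow or wide.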
Post #454977
Posted Wednesday, February 13, 2008 7:05 AM


SSC-Addicted

Wow, thanks for the article. This is the kind of mystery that would have me banging my head against a wall for days. I was unaware of sp_recompile and WITH RECOMPILE; I'm going to research these options immediately.

Thanks for saving my head :D
Post #455027
Posted Wednesday, February 13, 2008 8:43 AM


Hall of Fame

There have been scripts posted on the site before to go through and recompile all stored procs and refresh all views. These have been particularly helpful for me in the past.

Good article topic!

Here is one of them:
http://www.sqlservercentral.com/scripts/Maintenance+and+Management/31902/
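The basic idea behind those scripts is just a loop over the user procedures calling sp_recompile; a simplified sketch (not the full script from the link) looks like:

```sql
-- Simplified sketch: mark every user stored procedure for recompilation.
-- sp_recompile drops the cached plan; each procedure recompiles on next use.
DECLARE @name sysname;
DECLARE proc_cursor CURSOR FOR
    SELECT QUOTENAME(ROUTINE_SCHEMA) + '.' + QUOTENAME(ROUTINE_NAME)
    FROM INFORMATION_SCHEMA.ROUTINES
    WHERE ROUTINE_TYPE = 'PROCEDURE';
OPEN proc_cursor;
FETCH NEXT FROM proc_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_recompile @name;
    FETCH NEXT FROM proc_cursor INTO @name;
END;
CLOSE proc_cursor;
DEALLOCATE proc_cursor;
```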

Post #455132
Posted Tuesday, February 19, 2008 1:59 PM
Forum Newbie

Another technique we use to disable so-called "parameter sniffing", instead of WITH RECOMPILE, is to declare local variables immediately inside the stored procedure and assign the parameters to them. That way, the optimizer cannot "sniff" the particular parameter values passed in during the first execution when it builds the query plan; it uses typical values instead, so we do not run the risk of a plan optimized for "atypical" values. The advantage of this technique is plan reuse: the plan may not be optimal for every scenario, but it is probably good enough for all of them.
We have tested this technique on some of our stored procedures and it seems to work fine. We are weighing it against WITH RECOMPILE to decide when and where to use each.
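In case it helps to see the pattern (names here are hypothetical), it is simply:

```sql
CREATE PROCEDURE dbo.GetOrders
    @StartDate datetime,
    @EndDate   datetime
AS
-- Copy the parameters into local variables so the optimizer cannot "sniff"
-- the values passed on first execution; it falls back to average-density
-- estimates from the statistics instead.
DECLARE @LocalStart datetime, @LocalEnd datetime;
SELECT @LocalStart = @StartDate, @LocalEnd = @EndDate;

SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.OrderDate BETWEEN @LocalStart AND @LocalEnd;
```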
Post #457632
Posted Tuesday, April 01, 2008 2:02 PM
Valued Member

If I have a situation where Query Analyzer and .NET, both running with the same parameter values, take different amounts of time to finish (.NET is slower), can I assume that Query Analyzer and .NET are using different cached plans?

If so, are both plans stored in a system table?

Thanks,
Luiz.
Post #478043
Posted Friday, November 21, 2008 6:56 AM
Forum Newbie

Hi friends,

I am new to this, and my issue is that my stored procedure runs very fast in Management Studio (1-2 seconds), but when I try to get those results into a DataSet it takes 8-10 seconds. My application code is below:

myConnection.Open();
SqlDataAdapter myAdapter = new SqlDataAdapter(myCommand);
DataSet ds = new DataSet();
myAdapter.Fill(ds);

Can anyone suggest any ideas?

Jino
Post #606477
Posted Friday, November 21, 2008 7:27 AM


SSChampion

Here's one possibility: if it's not parameter sniffing as mentioned above (which gives you a bad execution plan, so it's slow), it might actually be the data and the time it takes to move it over the wire (or wireless).

I believe that if I select 10 billion rows from a table with QA or SSMS, it doesn't really fetch all 10 billion rows: it executes the query, but only grabs the top few hundred, and as I scroll down it fetches more and more. It doesn't do it all at once.

But if I try to move those 10 billion rows into a DataSet, it's got to move all of them over the network instead of just the top few hundred, and then the client has to allocate memory for the DataSet, which might get paged to the hard disk because there's not enough RAM.

How big is the recordset you are grabbing?


Lowell

--There is no spoon, and there's no default ORDER BY in sql server either.
Actually, Common Sense is so rare, it should be considered a Superpower. --my son
Post #606513
Posted Friday, November 21, 2008 8:39 AM


SSC Eights!

I agree with Lowell, mostly. If jino.jose had said "Enterprise Manager" then I would not even have posted a reply. Management Studio does get the whole recordset: open a big table, sit back, and watch your memory drain away.

You are right about pushing the data "down the wire"; that could be the culprit. The same goes for Luiz: even if both are using the same plan, they don't use the same libraries to fetch the result set.

I remember the same controversy when ADO came out: more capable, takes more memory, works slower.


ATB

Charles Kincaid

Post #606580