I have accepted this recommendation because of the performance impact, and I don't dispute that.
But I find that it has killed the utility of object ownership for me in general. Has anyone made good use of object ownership since adopting this "always owner-qualify object references" guideline? I haven't. It's just an extra "dbo." that I have to add everywhere -- more typing with no benefit.
But back in the day, before this advice was proffered, we could use the ownership mechanism to implement some pretty cool workarounds. Applications and procs had to reference objects with ".." instead of an explicit owner name for this to work. When we created non-dbo-owned objects, a subset of users would access those instead, getting whatever different behavior was desired. I suppose this could get out of hand if used recklessly, but used carefully it was a great solution to some problems. We can't do this anymore because all of our production code says "dbo."
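For anyone who never saw this trick, here is a minimal sketch (the proc and user names are hypothetical). Under SQL Server's name-resolution rules, an unqualified reference is resolved against the caller's own user name first, falling back to dbo:

```sql
-- Hypothetical setup: the same proc name under two different owners.
CREATE PROC dbo.GetOrders AS
    SELECT OrderID, CustomerID FROM dbo.Orders
GO
-- Alternate behavior for one user (pilotuser is a made-up name):
CREATE PROC pilotuser.GetOrders AS
    SELECT TOP 10 OrderID, CustomerID FROM dbo.Orders
GO

-- Application code calls the proc WITHOUT an owner qualifier:
EXEC GetOrders
-- Run by pilotuser, this resolves to pilotuser.GetOrders;
-- run by anyone else, it falls back to dbo.GetOrders.
```

Once every call site says "EXEC dbo.GetOrders" instead, the per-user resolution never fires, which is exactly the loss being described.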
Why, yes, I would like some cheese with that whine....
I'm relatively new to SQL Server and have been using it for a web-based intranet application. Even though I have not yet encountered any problems from failing to qualify objects with the owner, this still seems like good advice to me.
Thanks for the article.
This is great advice. I would be curious to know, however, how many environments allow people to create objects with an owner other than dbo. In both environments where I have been a DBA, no user-owned objects were used; everything is owned by dbo. I think this is a great bit of functionality, but I wonder how many people actually use it.
Good advice as long as you are not trying to use the ownership rules to your advantage.
I would add that "Qualify object name by its owner" on the Script tab of the Options dialog in QA should be checked. The script generator in EM qualifies objects with the owner by default.
Curious... if you were aliased to dbo and all objects referenced were owned by dbo, would fully qualifying dbo-owned objects still be a benefit?
(dwebb) - That really is the question. As (bozo7) put it, it seems as though most people do not take advantage of multiple ownership of objects, and based on my experience I would have to agree. It seems to be a level of "complexity" that many administrators don't take the time to investigate, and admittedly, in many environments it may not be the appropriate way to architect the system anyway.
So, that truly does beg the question: Is this the appropriate "worst practice" in an environment that ONLY utilizes dbo owned objects? As the article pointed out, if you ever STARTED to take advantage of user-based ownership of objects in the database, then the "dbo" qualification can work to your advantage in applications that have been constructed in that manner.
But, as (dwebb) asked, is there a true performance difference in an environment that aliases everybody to dbo? If you NEVER qualify any of your objects, and always assume dbo, are we really gaining anything? I.e., will we wind up with a cache hit or not?
If so, then from a performance standpoint, this is a non-issue.
If not, then why? If all of the objects are owned by the exact same user, and that user is dbo, and this configuration is used by the majority of SQL environments (OK, big assumption there, but...), wouldn't it make sense to assume dbo first, always and forever, amen, REGARDLESS of whether we tell the system that this is the object we mean?
Actually, this topic got me thinking of another really bad practice along the lines of "object qualification". Perhaps this is not the thread for this discussion, but I'd be curious to see how everyone responds.
The bad practice I'm referring to is SERVER.DBNAME qualification (in "user" databases of course... don't give me a hard time for master, msdb, et al). The problem in this case is not performance; it's code reusability.
Once you stick that SERVER or DBNAME into your qualification, you're stuck. Actually, I've only ever seen the DBNAME hardcoded, as most people know better than to hardcode a SERVER qualification, unless you're using a linked server, in which case you kind of have to.
But, back to the point... Of course the problem with DBNAME qualification is the inability to move the SQL to another database of a different name. The only time (that I can think of) you would want or need cross-database references is for cross-database SQL. In that case, for architectural reasons, I've found it better to take a hit on performance (by using dynamic SQL) rather than hardcode a database name into the SQL.
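A minimal sketch of what I mean (the database name, table, and date filter here are all hypothetical): build the cross-database reference at runtime instead of baking the name into the code.

```sql
-- Hypothetical: @dbname could come from a configuration table or parameter,
-- so the same proc works no matter what the other database is called.
DECLARE @dbname sysname
DECLARE @sql nvarchar(400)
SET @dbname = N'ReportingDB'

-- QUOTENAME guards against unusual characters in the database name.
SET @sql = N'SELECT CustomerID, OrderDate FROM '
         + QUOTENAME(@dbname) + N'.dbo.Orders WHERE OrderDate >= @cutoff'

EXEC sp_executesql @sql, N'@cutoff datetime', @cutoff = '20030101'
```

Moving the code to a server where the other database has a different name then only requires changing the configured value, not the SQL itself.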
ok, go ahead, shoot me down or ignore me...
This article is very misleading on one important point. It comes very close to stating, or at least implying, that you must specify owner names for cache reuse to work. Although the author didn't say that explicitly, I think that unless readers already understood cache reuse before reading the article, they would walk away thinking that you must supply owner names or you will not get cache reuse. This is simply not true.
What is true is that cache reuse is based on the text string being executed, after adjusting for parameters. In other words, if SQL has seen your parameterized string before (even if SQL had to auto-parameterize it), it will obtain a cache hit (assuming for the moment a stable set of session settings for both invocations).
If you look at the way the author's script is laid out, all it proves is that if you compile two different SQL strings (adjusted for parameterization), you get two different cache inserts. This is true whether or not owner names are explicitly used. The simple script below shows that you can obtain cache hits using default object names. Trace the following script and you will see a cache insert for the first sp_executesql call, followed by a cache hit for the second call, even though the script's SQL string does not specify an object owner name.
USE Northwind
DECLARE @sql nvarchar(200)
DECLARE @params nvarchar(100)
DECLARE @intvar int

-- This will generate SP:CacheInsert in SQL Profiler:
SET @sql = N'SELECT OrderID, CustomerID, EmployeeID, OrderDate FROM Orders WHERE EmployeeID = @empid'
SET @params = N'@empid int'
SET @intvar = 5
EXEC sp_executesql @sql, @params, @empid = @intvar

-- This will generate SP:ExecContextHit in SQL Profiler:
SET @intvar = 6
EXEC sp_executesql @sql, @params, @empid = @intvar
There is enough misunderstanding about this topic out in the wild that we should really be careful how we talk about it here, where lots of people come looking for SQL expertise. Otherwise, we may wind up reading an article some day entitled (and I hope the other Chris takes this in good humor) Worst Practices: Potentially Misleading Worst-Practice Articles.
I didn't have time to read through all the other posts, so I apologize if anybody else has covered this.