Unfortunately, many "experts" are considered to be experts simply because they publish very often or have written a book on a subject that happens to sell well. The people reading the articles or buying the books don't know any better, and then those poor minions back the author up by touting his exploits.
One of thousands and thousands of examples of this syndrome was a forum post that was advertised on LinkedIn. It talked about how great Recursive CTEs (rCTEs from here on) were for counting and how they were "set based". Of course, nothing could be further from the truth. Even a well-written WHILE loop can beat an rCTE that "counts" for performance and resource usage.
When that contrary proof was demonstrated with code, along with 3 different methods (also with demonstrable code) that blew the doors off of the rCTE method of counting, the author still insisted that the rCTE was better, easier, and faster, and then also justified it as being "excellent" when low row counts were expected. The really bad part was that this author's minions jumped on the same bandwagon even when there was fully demonstrable code that proved otherwise.
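To make the comparison concrete, here's a minimal T-SQL sketch of the two approaches being argued about. The row count of 10,000 is arbitrary, and the cascading-CTE "tally" shown is just one of the well-known set-based row-source patterns; it is not necessarily the exact code from that thread.

```sql
-- Method 1: the rCTE "counter". Each row requires another trip through
-- the recursive member, so it is effectively a hidden row-by-row loop.
WITH rCTE AS
(
    SELECT 1 AS N
    UNION ALL
    SELECT N + 1 FROM rCTE WHERE N < 10000
)
SELECT N
  FROM rCTE
OPTION (MAXRECURSION 0);  -- default recursion limit is only 100

-- Method 2: a cascading-CTE "tally". Cross joins build the rows as a
-- single set, which typically costs far less CPU and far fewer reads.
WITH E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),
                                     (1),(1),(1),(1),(1)) v(N)), -- 10 rows
     E4(N) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b
                             CROSS JOIN E1 c CROSS JOIN E1 d)    -- 10,000 rows
SELECT TOP (10000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
  FROM E4;
```

Run both with SET STATISTICS TIME, IO ON and compare for yourself; that's the whole point of "fully demonstrable code".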
Other great examples of this "expert" syndrome come in the form of "holy grail" articles that seem to be (and usually are) very well written, with fully automated test data generation, a test harness, and fully demonstrable code. But, under the covers, there are "devils in the data" that, apparently, no one even considers. A great example of that problem is the set of articles that supposedly prove that the "XML Method" for splitting strings is the "best method". Because of the seemingly scientific nature of having a ton of test data and all the code producing repeatable results, no one realizes the impact that simply repeating the same row 10,000 times will have on the optimizer, which has actually made the worst method look like the best and vice versa. Unfortunately, the minions that read it and never make that realization consider the author to be an expert simply because of how well the article was written and because of the previous popularity the author gained by such articles or posts.
Unfortunately and frequently, when such authors are challenged with such contrary and fully demonstrable facts, they do things like censor the responses and even shut down all replies to their post. I even had one author tell me that he didn't want to take the time to do any additional testing because the article was complete and he was moving on. He then saw fit to write multiple other articles that contain the same horrible mistake he made in the first article... and his minions love it!
Don't get me wrong... there are true "Experts" out there, and they do publish a whole lot, but no author that I know of has gone without making a mistake or two. The problem is much like what my Dad once said about books... "Half of all that is written is wrong. The other half is written in such a fashion that you can't tell".
So, getting back to the question: what makes an expert? There are only two things, IMHO...
1. Being demonstrably correct most of the time.
2. Being humble enough to admit when they made a mistake and then correct it.
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
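As a minimal sketch of that shift (the table name, column, and row counts here are made up purely for illustration), compare doing the same price change row-by-row versus doing it to the column as a set:

```sql
-- ROW thinking (RBAR): visit one row at a time in a loop.
DECLARE @ID INT = 1;
WHILE @ID <= 1000
BEGIN
    UPDATE dbo.SomeTable          -- hypothetical table
       SET Price = Price * 1.10   -- 10% increase, one row per pass
     WHERE ID = @ID;
    SET @ID += 1;
END;

-- COLUMN thinking (set based): describe what happens to the column once
-- and let the engine apply it to the whole set in a single statement.
UPDATE dbo.SomeTable
   SET Price = Price * 1.10
 WHERE ID BETWEEN 1 AND 1000;
```

Both produce the same result; the second says *what* should happen to the column instead of *how* to walk the rows, and that's the paradigm shift.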
"Change is inevitable... change for the better is not".
"If "pre-optimization" is the root of all evil, then what does the resulting no optimization lead to?"
How to post code problems
How to Post Performance Problems
Create a Tally Function (fnTally)