I have to disagree a bit with the opening quote in the article. My "wins" are learning episodes, so I'm always winning: not because I'm undershooting my potential, but because I'm always learning and increasing my potential. I never really fail, because every failure is a learning experience, and that learning increases my potential, which always makes it a "win".
Here's an example that I created in code this very night... I created a Clustered Index with more than 99% fragmentation, and an intentional scan of that index was almost twice as fast as scans of two other indexes containing exactly the same data with the same number of rows, both of which had only 0.36% logical fragmentation and close to 100% page density. I scienced out the exact reason why that should be and then proved it with code.
Then I proved the unthinkable... I made the 99% fragmented index twice as slow by [insert drum roll here] defragmenting it! That was a fortuitous accident that I can't explain... yet. It's not "cold fusion" either. I've repeated the experiment in several different ways with the same results.
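For anyone who wants to poke at this kind of thing themselves, here's a minimal sketch of how the two measurements in an experiment like the one above can be taken in SQL Server. The table and index names are hypothetical placeholders, not the ones from my actual test; the DMV and SET options are standard.

```sql
-- Measure logical fragmentation and page density for every index on a
-- (hypothetical) test table. 'SAMPLED' mode is needed because page
-- density (avg_page_space_used_in_percent) is not returned in 'LIMITED' mode.
SELECT  i.name                              AS index_name
       ,ips.avg_fragmentation_in_percent    AS logical_fragmentation_pct
       ,ips.avg_page_space_used_in_percent  AS page_density_pct
       ,ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.TestTable'),
                                       NULL, NULL, 'SAMPLED') AS ips
JOIN    sys.indexes AS i
  ON    i.object_id = ips.object_id
 AND    i.index_id  = ips.index_id;

-- Time a deliberate scan of the clustered index (index_id = 1) and
-- capture the read counts. Do this on a dev box, not production.
SET STATISTICS TIME, IO ON;
SELECT COUNT_BIG(*) FROM dbo.TestTable WITH (INDEX(1));
SET STATISTICS TIME, IO OFF;
```

Whether the cache is warm or cold between runs matters a lot for this kind of timing, so repeat each scan several times and compare both the elapsed time and the logical/physical read counts.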
Why am I doing this crazy stuff?
Someday I'll tell that story, but it all boils down to one thing that I learned on Monday, the 18th of January, 2016, and it's especially true in the world of computers...
The burden of proof is on the beholder.