First, a comment about the "official" topic of the article.
I do agree, to some extent - but with a twist. I think that all indexes, both unique and nonunique, should be considered tuning instruments. The part you can "code" is what I would call "business rules", and those should be enforced with constraints. SQL Server will automatically create an index for every primary key or unique constraint, so you get those indexes by default.
If I need to implement a business rule that enforces uniqueness, I will use a primary key or unique constraint. If I can get additional performance by picking a smart order of the columns in the constraint, I will - but the main purpose of the constraint is to enforce the business rule.
When I create an additional index for performance, I will declare it as unique when I know that it will contain unique values - which can only be the case if it contains all columns of a unique constraint or of the primary key. Protecting integrity is not what I create such an index for. The only reason I declare it as unique is to give SQL Server's optimizer more information, not to enforce the business rule.
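To make the distinction concrete, here is a sketch (table, column, and index names are made up for illustration): the constraint enforces the business rule, while the separate index - declared unique only because it happens to contain the full constraint key - exists purely for performance.

```sql
-- Business rule: no two customers may share an email address.
-- Enforced with a constraint; SQL Server builds a supporting index automatically.
ALTER TABLE dbo.Customers
  ADD CONSTRAINT UQ_Customers_Email UNIQUE (Email);

-- Tuning instrument: a covering index for a frequent query pattern.
-- It contains the Email column, so its keys are guaranteed unique;
-- declaring it UNIQUE just hands the optimizer that extra information.
CREATE UNIQUE NONCLUSTERED INDEX IX_Customers_Region_Email
  ON dbo.Customers (Region, Email)
  INCLUDE (CustomerName);
```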
But that theory does not always fly. SQL Server's non-standard implementation of the UNIQUE constraint means that I sometimes have to use a filtered index instead of a constraint. And there are also edge cases where, even with an ANSI-compliant UNIQUE implementation, I would still need to use a filtered index to enforce a business rule. In those cases, I will have to grit my teeth and accept having to use a (filtered) unique index.
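The classic example is a column that must be unique when present but may be NULL in many rows. SQL Server's UNIQUE constraint treats NULL as a value and allows it only once; the ANSI standard permits any number of NULLs. A filtered unique index gets you the ANSI behaviour (names here are illustrative):

```sql
-- A UNIQUE constraint on TaxNumber would allow only a single NULL.
-- This filtered unique index enforces uniqueness for non-NULL values only,
-- matching the ANSI interpretation of a UNIQUE constraint.
CREATE UNIQUE NONCLUSTERED INDEX IX_Suppliers_TaxNumber
  ON dbo.Suppliers (TaxNumber)
  WHERE TaxNumber IS NOT NULL;
```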
There are also some other elements in the article I want to comment on. The first is this quote:
"A non-unique index will not be used if the data is not of sufficient quantity. If there are only a dozen rows in a table, then the server will ignore indexes because it's faster to scan the table than it is to use an index. There must be just enough overhead involved in an index seek or an index scan to make it not worthwhile for small tables. I experimented recently and found that the server only started using indexes when there were tens of thousands of rows in the table."
Please experiment a bit more. This statement is completely incorrect. The optimizer bases its choice on estimated cost, not on a row-count threshold; even on tables with very few rows, indexes will be used whenever they are the cheapest way to satisfy the query.
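This is easy to verify. A minimal repro might look like this (names made up); check the execution plan of the final query, and you will typically see it use the nonclustered index even though the table holds only three rows, because the index covers the query and is cheaper than scanning the clustered index:

```sql
-- A table with just a handful of rows.
CREATE TABLE dbo.TinyDemo
  (Id      int          NOT NULL PRIMARY KEY,
   Code    char(4)      NOT NULL,
   Payload varchar(100) NOT NULL);

CREATE NONCLUSTERED INDEX IX_TinyDemo_Code ON dbo.TinyDemo (Code);

INSERT INTO dbo.TinyDemo (Id, Code, Payload)
VALUES (1, 'AAAA', 'one'), (2, 'BBBB', 'two'), (3, 'CCCC', 'three');

-- Inspect the plan for this query: the nonclustered index covers it
-- (Code as key, Id carried in as the clustered key), so a seek on it
-- can be cheaper than a table scan even at this tiny size.
SELECT Id
FROM   dbo.TinyDemo
WHERE  Code = 'BBBB';
```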
My final comment is on the automated index maintenance. And that comment is: "please don't".
I have seen missing index recommendations that make no sense at all (for instance, adding an INCLUDE clause for the primary key column), recommendations for indexes that already exist, and I once had (and unfortunately lost) a script where SQL Server recommends an index, then does not use that index for the very same query but instead recommends a duplicate copy of it. Add automated index creation on top of that, and you're right on track for misery.
There can also be a lot of overlap between missing index recommendations. I have seen indexes recommended on (Col1, Col2), (Col1, Col2, Col3), (Col1, Col3), and (Col1, Col3) with an include of (Col2). A single index can satisfy all four recommendations - not with 100% effectiveness, but close enough, and with much lower overhead. If each of those four indexes is used, your script that drops unused indexes will not pick this up. But using a single index instead of all four would be a much better choice!
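As an illustration of consolidating those four recommendations (table and column names are placeholders, of course): one index keyed on (Col1, Col3, Col2) serves them all. Queries that seek on Col1 and Col3 get a full seek; queries that filter on Col1 and Col2 seek on Col1 and evaluate Col2 as a residual predicate on the index rows - slightly less effective, but far cheaper than maintaining four overlapping indexes.

```sql
-- Instead of four overlapping indexes...
--   (Col1, Col2)
--   (Col1, Col2, Col3)
--   (Col1, Col3)
--   (Col1, Col3) INCLUDE (Col2)
-- ...one index contains the same columns and serves the same queries:
CREATE NONCLUSTERED INDEX IX_SomeTable_Col1_Col3_Col2
  ON dbo.SomeTable (Col1, Col3, Col2);
```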
Dropping unused indexes is very dangerous. SQL Server tracks how often an index is used, not what effect each use had. A single use of an index might save two hours, while a million uses combined might save only a minute. And an index that appears completely unused might be the one index that makes the difference between an end-of-year reporting job finishing in an hour or running for two weeks (and holding locks on all tables).
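The numbers SQL Server exposes illustrate the problem: sys.dm_db_index_usage_stats reports how often an index was touched, but says nothing about how much work each use saved - and the counters are reset whenever the instance restarts, so a "never used" index may simply not have been needed yet since the last restart.

```sql
-- Usage counts only - no measure of benefit, and reset on instance restart.
SELECT OBJECT_NAME(s.object_id)  AS table_name,
       i.name                    AS index_name,
       s.user_seeks,
       s.user_scans,
       s.user_lookups,
       s.user_updates            -- the maintenance-cost side of the ledger
FROM   sys.dm_db_index_usage_stats AS s
JOIN   sys.indexes AS i
  ON   i.object_id = s.object_id
 AND   i.index_id  = s.index_id
WHERE  s.database_id = DB_ID();
```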
I have seen your reply and it is well possible that you know what you are doing. But by publishing this article, you will create lots of problems on other servers. Way too many people will see the article, fail to read the warnings in your code (tl;dr) or the discussion on the boards (tl;dr2), and simply copy/paste your code. You can say that they themselves are responsible for the problems they introduce by copy/pasting code without really understanding what it does, and you would be right - but I think that authors have a responsibility too.