As a Cluster MVP and someone many think of when it comes to clustering SQL Server, I have to disagree. Not every configuration should be clustered.
While clustered configurations of SQL Server can provide many benefits (and I hate using "clustering" in a generic sense - there is the WSFC, which is the Windows layer; the FCI, which is the clustered instance of SQL Server; and AGs, availability groups, which require a WSFC), there are plenty of reasons you might not consider them.
1. You can't handle the administration. To truly deploy an FCI or AG, you have to really understand WSFCs and concepts like quorum. I still find people using disk-only quorum, for heaven's sake! (Just one example.)
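For anyone who wants to check what their WSFC is actually using, here's a quick sketch against the sys.dm_hadr_cluster DMV (available in SQL Server 2012 and later) - what counts as the "right" quorum model depends on your node count and topology, but NO_MAJORITY: DISK_ONLY is the one to avoid:

```sql
-- Report the quorum model of the WSFC underneath this instance.
-- A quorum_type_desc of NO_MAJORITY: DISK_ONLY means the quorum disk
-- is a single point of failure - exactly the misconfiguration above.
SELECT cluster_name,
       quorum_type_desc,
       quorum_state_desc
FROM sys.dm_hadr_cluster;
```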
2. While it is easier than it used to be - and I did cluster back in the NT 4/SQL Server 7.0 days, when Windows was more difficult than it is now, though truth be told SQL Server was the dog there, not Windows; the fact the article says SQL Server 7.0 was usable is laughable, since clustering SQL Server wasn't really usable until SQL Server 2000 for many reasons - it's still an inherently more complex architecture. See point #1: even as a Cluster MVP, there's no way I lead with clusters or recommend them where people can shoot themselves in the foot.
3. You can't upgrade OS versions or do things like change domains. If you have those scenarios, WSFCs are not for you.
4. We've moved beyond traditional shared storage with things like SMB 3.0 support (added in SQL Server 2012) and CSV support (added in SQL Server 2014). This makes deployments much more interesting and possibly scalable from a drive perspective.
5. The more nodes you have and the more instances you have, the more of a nightmare it is to patch. Do not underestimate the update scenario.
6. While it's technically true that the only downtime during patching is the failover and the script upgrade (and if you have multiple instances, you're not isolating them - but that's a whole other topic), you're not done at that point. As someone who does this a lot, in my opinion patching is not complete until I've verified things work after patching as they did before. That means fully testing failover to wherever that instance is supposed to be able to run.
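For what it's worth, that post-patch sanity check can start with a couple of queries run on each node after failing over to it - a sketch assuming SQL Server 2012 or later; the DMVs are real, but what you consider "healthy" for your environment is up to you:

```sql
-- Confirm which node is actually hosting the instance right now,
-- and that the build number matches what you just patched to.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node,
       SERVERPROPERTY('ProductVersion')              AS build;

-- If AGs are involved, confirm every replica is connected and healthy
-- after the failover, not just the one you happen to be on.
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.connected_state_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
  ON ars.replica_id = ar.replica_id;
```

Queries like these only cover the instance side; you still need to exercise the application against each node before calling the patch cycle done.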
7. Geographically dispersed clusters in SQL Server 2012 shift the problem: instead of needing a stretched VLAN, we now have possible DNS issues. Easier at a high level, but it introduces other challenges.
8. If you are going to new hardware every three years or so, you should refresh the OS as well, which means a new WSFC. Use log shipping or AGs to migrate and be done with it. Just swapping new nodes into an old WSFC seems pointless in most - but not all - cases.
9. You can't just add a VM to see if it will work. Hyper-V and VMware each have specific ways you can present disks if you're using shared storage; if you're not using iSCSI or SMB, there's no way this can happen. The same goes for moving back to physical.
10. FCI + AG is definitely a complex architecture that can be downright ugly if you don't know what you're doing. Two words: asymmetric storage. And let's not get started on the quorum scenario if you're doing it over a distance ...
Well intentioned, but definitely flawed, article.