This article, like many others on the topic, seems to perpetuate a very common misconception regarding the use of multiple files and filegroups.
The misconception is that there is no performance benefit to creating multiple files on the same disk or array. This is simply not true. SQL Server creates a single I/O thread per physical data file, so with only one data file there is no way for SQL Server to perform parallel reads. You can see this by watching disk queue length in Perfmon: when SQL Server appears to be I/O bound, the physical disk queue is often very low or even zero. This happens because SQL Server is reading data through a single I/O thread and therefore not taking advantage of all the physical I/O bandwidth the disk subsystem provides. The solution is to create multiple files and use filegroups to place objects on different files, ensuring parallel reads.
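As a minimal T-SQL sketch of that approach (the database name, file paths, and filegroup/table names here are all hypothetical, for illustration only):

```sql
-- Add a new filegroup backed by two physical files, so that objects
-- placed on it are striped across more than one data file.
ALTER DATABASE SalesDB ADD FILEGROUP FG_Data2;

ALTER DATABASE SalesDB
ADD FILE
    (NAME = SalesData2a, FILENAME = 'D:\Data\SalesData2a.ndf', SIZE = 512MB),
    (NAME = SalesData2b, FILENAME = 'D:\Data\SalesData2b.ndf', SIZE = 512MB)
TO FILEGROUP FG_Data2;

-- Place a table on the new filegroup via its clustered index:
-- the ON clause controls where the table's data pages live.
CREATE TABLE dbo.OrderHistory
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_OrderHistory PRIMARY KEY CLUSTERED (OrderID)
) ON FG_Data2;
```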
Also, the argument that separating indexes is a bad idea because it prevents you from backing up a table as a unit is a weak one. By placing table data and nonclustered indexes on separate filegroups, you speed up bookmark lookups when a nonclustered index is used. In the seven years I've been doing database consulting, I have yet to see anyone actually use filegroup backup. Anyone who needs to back up a database in parts is usually already employing other fault-tolerance techniques, such as BCVs.
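The index separation described above is a one-line change at index creation time. A hedged sketch, assuming a table dbo.OrderHistory on one filegroup and a second filegroup named FG_Indexes (both names hypothetical) that already exists with its own file(s):

```sql
-- The trailing ON clause places the nonclustered index's pages on
-- FG_Indexes, separate from the table's data pages, so a bookmark
-- lookup touches files on both filegroups.
CREATE NONCLUSTERED INDEX IX_OrderHistory_OrderDate
ON dbo.OrderHistory (OrderDate)
ON FG_Indexes;
```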