• This is an interesting idea, and I think I get the author's point - why reach for the FTS "sledgehammer" when the task doesn't need all the extra functionality? One thing that could add value to the author's tokenization/matching approach, compared to FTS, is approximate matching: phonetic, edit distance, n-gram, or common-substring matching (or some combination of these). A set-based n-gram solution in particular can give you very good performance and good match accuracy; a minimal sketch follows below.
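
To make the set-based n-gram idea concrete, here is a minimal sketch (my own illustration, not the author's code): strings are tokenized into character trigrams, and candidates are ranked by Jaccard similarity of their trigram sets. The function names and the 0.3 threshold are arbitrary choices for the example.

    def ngrams(text: str, n: int = 3) -> set[str]:
        """Set of character n-grams, padded so short strings still overlap."""
        padded = f" {text.lower()} "
        return {padded[i:i + n] for i in range(len(padded) - n + 1)}

    def jaccard(a: set[str], b: set[str]) -> float:
        """Jaccard similarity of two n-gram sets (1.0 means identical sets)."""
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    def best_matches(query: str, candidates: list[str], threshold: float = 0.3):
        """Rank candidates whose n-gram overlap with the query clears the threshold."""
        q = ngrams(query)
        scored = ((jaccard(q, ngrams(c)), c) for c in candidates)
        return sorted((s for s in scored if s[0] >= threshold), reverse=True)

    # A misspelled query still surfaces the intended record first.
    print(best_matches("recieve payment", ["receive payment", "refund payment", "invoice"]))

Because the n-gram sets can be precomputed and indexed (e.g. an inverted index from trigram to row), the expensive pairwise comparison only needs to run against candidates that share at least one trigram with the query, which is where the good performance comes from.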