The choice of example seems reasonable for demonstration purposes, but not at all reasonable for a real-world scenario.
The author asks the rhetorical question, "Why do we not have a single column for this purpose?" I think there are plenty of reasons, including: normalization (and all of its implications); the meaning of null when one value is known and the other is not; performance when evaluating column expressions and WHERE clauses; difficulty in using these types in ad hoc query or reporting tools; and so on.
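To make the null-semantics and queryability points concrete, here is a minimal sketch contrasting the two designs. The `Point` UDT and the table and column names are hypothetical, not taken from the article:

```sql
-- Two-column design: each coordinate can independently be NULL,
-- and each is indexable and usable directly in WHERE clauses.
CREATE TABLE LocationsPlain (
    LocationID int PRIMARY KEY,
    X float NULL,   -- "X known, Y unknown" is directly representable
    Y float NULL
);

-- Single CLR UDT column: the whole value is either present or NULL.
-- "X known but Y unknown" must be encoded inside the type itself,
-- and filtering requires invoking the UDT's properties or methods,
-- which ad hoc query and reporting tools handle poorly.
CREATE TABLE LocationsUdt (
    LocationID int PRIMARY KEY,
    Location dbo.Point NULL   -- hypothetical CLR user-defined type
);
```

A predicate like `WHERE Location.X > 10` against the UDT table has to call into the CLR for each row it examines, whereas `WHERE X > 10` against the plain table can use an ordinary column index.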
These types, especially as defined through the CLR, should be used sparingly, and only where the values will be manipulated in an application that handles such structures easily. SQL just isn't optimal for complex types, and I have seen very few instances of user-defined types in real applications in SQL Server, or even in Oracle for that matter. Where they have been used, UDTs did not identify rows or serve as basic attributes; they represented whole objects and were processed one at a time in high-level languages, not in SQL.
Have others seen widespread use of UDTs? Oracle has had this capability for a long time, and it is still rare to see UDTs in applications, so I doubt they'll catch on much in SQL Server either, except that Visual Studio will make it all too easy for developers to add them.