One solution is to add an "optimize for" hint to the particular sensitive query and supply a known value that yields the desired plan.
Or use "optimize for unknown", so the plan does not depend on the statistics for one single input parameter value.
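To make the two options concrete, here is a minimal T-SQL sketch. The procedure, table, and parameter names (`dbo.GetOrders`, `dbo.Orders`, `@CustomerId`, the value 42) are hypothetical placeholders, not from the original discussion:

```sql
-- Hypothetical example: pin the plan to a known "typical" value.
CREATE PROCEDURE dbo.GetOrders @CustomerId int
AS
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId = 42));  -- plan is compiled as if @CustomerId were 42
GO

-- Alternative: ignore the sniffed value entirely and optimize
-- for the statistical average density instead:
-- ... OPTION (OPTIMIZE FOR UNKNOWN);
```

Either way you get one fixed plan for all executions, which is exactly why the skew objection below applies.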
Beware! Both of those options are GUARANTEED to give you BAD PERFORMANCE some of the time in the face of significant data value skew!
Not always true.
Any solution that does not constantly recompile and favors one plan over another will suffer the same.
Recompiling CAN be costly too on queries with many tables and/or indexes.
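For completeness, the constant-recompile alternative being weighed here looks like this (again with hypothetical object names):

```sql
-- Hypothetical example: force a fresh plan on every execution.
-- Compile cost rises with the number of tables and indexes involved,
-- which is the trade-off discussed above.
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);
```

Each execution gets a plan optimized for the actual parameter value, at the price of paying the optimizer's cost every time.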
As long as you test with a data set that has the same skew and sufficient volume, and find it working well, there is no problem with going for a middle-of-the-road approach.
It remains a cost trade-off between the overhead of a generic (less than optimal) plan and the cost of constant compile overhead.
The current solution is also sensitive to skew; it just reduces the likelihood of picking the undesired plan and is thus not an airtight solution.
To get true control, the author could read up on plan guides; see BOL: "This topic describes plan guides and explains how they can be used to optimize the performance of queries when you cannot or do not want to change the text of the query directly."
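A plan guide attaches a hint to a query without touching its text. A minimal sketch using `sp_create_plan_guide` (object names and the chosen hint are assumptions for illustration):

```sql
-- Hypothetical example: attach OPTIMIZE FOR UNKNOWN to a statement
-- inside dbo.GetOrders without editing the procedure itself.
-- @stmt must match the statement text as it appears in the module.
EXEC sp_create_plan_guide
    @name            = N'GuideOrdersByCustomer',
    @stmt            = N'SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId',
    @type            = N'OBJECT',
    @module_or_batch = N'dbo.GetOrders',
    @params          = NULL,  -- must be NULL for @type = OBJECT
    @hints           = N'OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN))';
```

This gives per-query control while leaving the application code unchanged, which is the scenario BOL describes.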
The only true solution is, as I suggested, for Microsoft to fix when cached plans are reused and when they are not.
The simplest action would be to detect "hey... this is going wrong" and then force a recompile on that specific query.
And there are other possible changes that could adapt to skewed data and optimize plans in stages (in the background) without negatively affecting execution times.