Here are a couple of things to consider.
When SQL Server 2005 introduced varchar(max), I encouraged devs to adopt the new data type immediately, and most did. Eventually we decided to script the column data type change from text to varchar(max). We applied the change, and about 100 columns were converted in a few minutes.
Unfortunately, it turned out that BCP treated varchar(max) and text differently. A couple of weeks later we found that literally dozens of routines had to be reworked to make BCP handle the new data type correctly.
I disagree with the prohibition against float, and note that if you decide you must get rid of float, you will also have to deal with real. IMHO, float/real are powerful tools that a competent computer scientist must understand and know how and when to use. Like GUIDs, cursors, and triggers, float is not inherently bad, but it is often misused, abused, and misunderstood.

In fact, IMHO both of the URLs you provide demonstrate that lack of understanding. For example: "Decimal/Numeric is Fixed-Precision data type, which means that all the values in the data type reane can be represented exactly with precision and scale." (sic). This is simply not true. Every number base we commonly use has infinitely many values that cannot be represented exactly in any finite number of digits (in base 10, for example: π, e, even 1/3), and such values crop up constantly in our calculations of area, volume, average, standard deviation, amortization, etc.
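To make that concrete, here is a quick sketch in Python (not T-SQL, just to illustrate the arithmetic, since Python's float is the same IEEE 754 double SQL Server uses and its decimal module behaves like a fixed-precision decimal type): binary float cannot represent 0.1 exactly, but fixed-precision decimal cannot represent 1/3 exactly either. Neither type is "exact" for all values; each is exact only for values expressible in its base and precision.

```python
from decimal import Decimal, getcontext

# Binary floating point cannot hold 0.1 exactly,
# so a familiar sum misses its target:
print(0.1 + 0.2)             # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False

# But fixed-precision decimal has the same kind of limit.
# 1/3 never terminates in base 10, so it is truncated at the
# context precision (28 significant digits by default):
third = Decimal(1) / Decimal(3)
print(third)                 # 0.3333333333333333333333333333
print(third * 3)             # 0.9999999999999999999999999999
print(third * 3 == Decimal(1))  # False
```

The point is not that decimal is bad; it is that "exact" only ever means "exact for the values this base and precision can express", which is true of float as well.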
Check out "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
Don't throw a well-designed, well-implemented, powerful tool out of your toolbox just because you don't understand how to use it. Take the time to learn its features and own it.