There's something missing that I don't see in your article or in the Microsoft documentation.
If you convert the bigint value 9223372036854775807 to float, the exact value of the result is 9223372036854775808. This is the nearest value that can be expressed in binary with 53 significant bits, and SQL Server makes that conversion correctly. But STR(*, 19) produces '9223372036854775800'. What we're not being told is that the value is rounded to 17 significant digits before the trailing zeros are added. The final result is off by 8 from the original number, but the conversion to float is responsible for only 1 of those.
The same happens with the second bigint value, 922337203685477580. The nearest value that can be stored in a float is 922337203685477632, and SQL Server correctly makes that conversion. But STR(*, 19) rounds to 17 significant digits and then adds a trailing zero, giving '922337203685477630'. In this case the conversion to float added 52, but after rounding the final result is off by 50.
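Both steps can be reproduced outside SQL Server, because SQL Server's float is an IEEE 754 double, the same 64-bit type as Python's float. Here is a quick sketch; the round-to-17-significant-digits step is my reading of what STR(*, 19) appears to do, not anything Microsoft documents:

```python
# SQL Server's float is an IEEE 754 double, same as Python's float,
# so the conversion step is reproduced exactly.
for v in (9223372036854775807, 922337203685477580):
    stored = int(float(v))            # exact value the double holds
    # Round to 17 significant digits and keep trailing zeros,
    # mimicking what STR(*, 19) appears to do:
    displayed = round(stored, 17 - len(str(stored)))
    print(v, stored, displayed)
```

This prints stored values 9223372036854775808 and 922337203685477632, and displayed values 9223372036854775800 and 922337203685477630, matching the STR results above.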
This is perfectly reasonable. Consider this code:
declare @F float = 1e30
The exact value of @F is 1000000000000000019884624838656, the closest value to 10^30 that can be expressed with 53 significant bits. STR outputs '1000000000000000000000000000000' as you would expect.
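This one can also be checked with any IEEE 754 double implementation; again Python's float is the same 64-bit double, and the same assumed round-to-17-significant-digits step explains the STR output:

```python
# Python's float is the same 64-bit IEEE 754 double as SQL Server's float.
x = int(1e30)
print(x)                            # 1000000000000000019884624838656
# Rounding to 17 significant digits leaves 1 followed by 30 zeros,
# which is exactly what STR displays:
print(round(x, 17 - len(str(x))))   # 1000000000000000000000000000000
```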