Since the original editorial we've had Solomon Rutzky's excellent Stairway to SQL CLR series, which pretty much drove a stake through the heart of many of the myths and assumptions surrounding SQLCLR. Solomon's own SQL# product demonstrates the art of the possible and is a useful toolkit.
I found that user-defined aggregates, while mechanically producing the result I wanted, created expensive execution plans. This wouldn't matter on a database server used by a small team of data scientists, but on a machine with a much wider audience and SLAs to adhere to, it very much did matter. My experience with other analytical databases is that they are very good at what they do, but I doubt they could handle a large number of concurrent users.
An evolution I have seen is the adoption of a broader spectrum of tooling for data processing, whether that be NoSQL, search-engine technology, other languages, compute frameworks such as Apache Spark and Pandas, or something else entirely. SQL Server does have an incredibly broad range of capabilities, and with that comes an equally broad range of possibilities. Even so, I've seen a marked preference for reaching for other technologies, perhaps too much so.
There are things that T-SQL is brilliant at. There are things it can be made to do, such as complex string manipulation, that are much better handled in SQLCLR. However, as soon as you decide that T-SQL might not be the answer, you also find that the idea of using SQL Server as the platform for what you intend to do starts to meet resistance.
If you ask people to pin down their objection to using SQL Server then, in my experience, they will mention an aversion to monolithic architectures, a preference for specific open-source tooling, licensing costs, and over-dependence on a single vendor.
I have seen technologies fall out of favour, not for their intrinsic faults, but because by the time people had figured out how to use them properly the world had moved on.