Let's assume that the databases & queries are tuned (though that's always a question for me; tuning is an ongoing process, not a one-and-done proposition). For example, since we're talking 2016 and large-scale systems, are you using columnstore indexes or clustered columnstore indexes? If not, are your queries analytical, with lots of aggregations? Then you may have some query & index tuning opportunities remaining.
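As a sketch of what that might look like (the table and column names here are hypothetical stand-ins for your own schema):

```sql
-- A clustered columnstore index stores the whole table in columnar
-- format, which suits large scans with heavy aggregation:
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact
ON dbo.SalesFact;

-- Or, keep the existing rowstore clustered index and add a nonclustered
-- columnstore over just the columns your reporting queries aggregate:
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesFact_Reporting
ON dbo.SalesFact (SaleDate, RegionID, Quantity, Amount);
```

Test it against your actual workload, of course; columnstore shines on scan-and-aggregate queries, not on singleton lookups or heavy OLTP-style updates.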
However, if they're tuned, your options are simple. Reduce the data being processed. Can't do that? Get bigger/faster/more memory, CPU & disks. Guaranteed, if things are running that slow, there's work being done somewhere. Whether it's memory or disk or CPU, find where the work is, and spend money on making it faster.
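One way to find where the work is being done is to look at the server's accumulated wait statistics. A minimal sketch (the exclusion list here is deliberately short; real scripts filter out many more benign system waits):

```sql
-- Top waits since the last restart (or stats clear) show where SQL Server
-- spends its time: PAGEIOLATCH_* suggests disk reads, RESOURCE_SEMAPHORE
-- suggests memory grant pressure, SOS_SCHEDULER_YIELD suggests CPU pressure.
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_seconds,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

Whatever wait type dominates tells you which piece of hardware (or which configuration) is the bottleneck worth spending money on.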
As to MongoDB or Hadoop, these are data collection mechanisms. They're great at it. Bar none, amazing. They are not reporting engines. In fact, most of the time, what you'll see is that the data gets collected in MongoDB/Hadoop/whatever and then gets transformed and loaded into another database management system that's good at reporting, like SQL Server. So, no, I would absolutely not recommend moving to MongoDB to make your reporting queries run faster. They'll probably run longer; in all likelihood, a lot longer.
Nope. It's always back to the basics. What kind of queries? What kind of data structures? Where is the hardware bottleneck? Statistics maintenance. Index structures. The code, the code, the code. This stuff is what's necessary. There's no magic bullet or secret switch. It's always about the basics.
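On the statistics maintenance point, a quick sanity check costs almost nothing (again, the table name is a hypothetical placeholder):

```sql
-- See when each statistics object on the table was last updated:
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.SalesFact');

-- If they're stale, refresh them; FULLSCAN gives the optimizer the best
-- numbers at the cost of a longer update:
UPDATE STATISTICS dbo.SalesFact WITH FULLSCAN;
```

Stale statistics lead the optimizer to bad row estimates, and bad estimates lead to bad plans, which is exactly the kind of "basics" problem that no platform migration will fix.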