Bit late to the party so hopefully you still read this.
Using a document store or some form of data lake with an RDBMS is a good step. I've been using a managed data lake solution, Azure Data Lake Store with Azure Data Lake Analytics, within Azure. Choosing this option means I don't have to manage the full Hadoop ecosystem (Sqoop, ZooKeeper, etc.), and I can fully integrate Active Directory across the stack, from the data lake -> data warehouse -> data mart.
The data lake gives me an accessible location to store all my raw data, which end users can query in its raw format BEFORE it's processed by whatever ETL or ELT you use. It also lets me offload, like you're looking to do, a lot of that processing to another system, whether for real-time loading or batch processing. That means fewer stored procedures in SQL Server and, most importantly, less work.
If SQL Server were a wheelbarrow, it would be the one constantly carrying all the rocks. Leveraging a data lake means more than one wheelbarrow is helping distribute those rocks, so SQL Server can focus on more important things, like serving and securing the data.
So yes, it's good thinking. I'd just, again, leverage managed services rather than trying to manage Hadoop yourself. Newer versions of SQL Server and Azure Data Warehouse both come with PolyBase now, which is roughly the equivalent of Hive for Hadoop. You can connect it to Azure Data Lake Store or S3-like buckets such as Blob storage. It's really nice.
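To give you an idea of what that looks like, here's a minimal PolyBase sketch for querying raw files in Blob storage as if they were a table. All the names (data source, container, table, columns) are made up for illustration, and the exact options vary a bit between SQL Server and Azure Data Warehouse, so treat this as a starting point rather than copy-paste:

```sql
-- Hypothetical credential holding the storage account key
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'storageuser', SECRET = '<storage-account-key>';

-- Point PolyBase at a Blob storage container (names are examples)
CREATE EXTERNAL DATA SOURCE RawLake
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://raw@mystorageaccount.blob.core.windows.net',
    CREDENTIAL = BlobCredential
);

-- Describe the raw file layout (CSV in this sketch)
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

-- External table over the raw files; no data is loaded into SQL Server
CREATE EXTERNAL TABLE dbo.RawSales (
    SaleId   INT,
    Amount   DECIMAL(18, 2),
    SaleDate DATE
)
WITH (
    LOCATION = '/sales/',
    DATA_SOURCE = RawLake,
    FILE_FORMAT = CsvFormat
);

-- End users can now query the raw data before any ETL/ELT runs
SELECT SaleDate, SUM(Amount) AS Total
FROM dbo.RawSales
GROUP BY SaleDate;
```

The nice part is that the external table is just metadata; the rocks stay in the lake until someone actually queries them.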