What's essential is knowing what data access patterns each application uses, then partitioning, structuring, and indexing the data to match. For ad-hoc aggregate reporting over millions or billions of records, a columnstore table, as in SQL Server DW, HBase, or Amazon Redshift, is a good fit. For something like an online banking app, each user touches a wide range of data elements but only within their personal slice of the pie. There you need a near-real-time data mart, maybe something like Cosmos DB or MongoDB, where all of a customer's recent transaction and profile data lives in something like a single JSON document that can be fetched within 10 ms.
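To make the two access patterns concrete, here is a minimal in-memory Python sketch (not a real database, and the customer data is made up): a column-oriented layout lets an aggregate scan touch only the one attribute it needs, while a document-per-customer layout serves the banking-app pattern with a single key lookup.

```python
import json

# Column-oriented layout: each attribute is stored contiguously,
# so an aggregate reads only the column it needs.
columns = {
    "customer_id": [1, 2, 3, 4],
    "amount":      [250.0, 75.5, 310.0, 42.25],
    "region":      ["east", "west", "east", "south"],
}

def total_amount(cols):
    # Scans a single column; the other attributes are never read.
    return sum(cols["amount"])

# Document-oriented layout: everything one customer's screens need,
# denormalized into one JSON document keyed by customer id.
customer_docs = {
    2: json.dumps({
        "customer_id": 2,
        "profile": {"name": "B. Kumar", "tier": "gold"},
        "recent_transactions": [{"amount": 75.5, "region": "west"}],
    }),
}

def fetch_customer(doc_store, customer_id):
    # One key lookup returns the customer's whole slice of the pie.
    return json.loads(doc_store[customer_id])
```

Real columnstore and document engines add compression, sharding, and indexing on top, but the access-pattern trade-off is the same as in this toy version.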
RDBMS engines like SQL Server are good for transactional read/write applications and for enforcing a highly constrained, single version of the truth, but even in the hands of an expert-level SQL coder they are only so-so at TB-scale aggregate reporting and high-volume queries. If you're willing to work outside the box, there are better options for the use cases described above.
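What "highly constrained single version of the truth" buys you can be sketched with Python's stdlib `sqlite3` (standing in for any RDBMS; the account table and `transfer` helper are illustrative): both updates in a transfer commit atomically or not at all, and a CHECK constraint makes an overdrawn balance unrepresentable.

```python
import sqlite3

# Stand-in relational schema: one authoritative copy of each balance,
# guarded by a CHECK constraint.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        account_id INTEGER PRIMARY KEY,
        balance    REAL NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO account VALUES (1, 100.0), (2, 50.0)")
conn.commit()

def transfer(conn, src, dst, amount):
    # Both updates succeed or neither does: `with conn` opens a
    # transaction that commits on success and rolls back on error.
    # A transfer that would overdraw src raises IntegrityError.
    with conn:
        conn.execute(
            "UPDATE account SET balance = balance - ? WHERE account_id = ?",
            (amount, src))
        conn.execute(
            "UPDATE account SET balance = balance + ? WHERE account_id = ?",
            (amount, dst))
```

This is exactly the workload where the relational model shines; the trade-off discussed above only appears when the same engine is asked to scan and aggregate terabytes.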
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho