I can remember a heated debate about scaling out a DB in which the DBA asked, "Why the hell would I want to scale out for a load this small?"
If your company is operating in a mature market then your market share may go up or down a few percentage points, but I doubt you are going to see radical growth. I'd ask how much scale you actually need and whether it requires a technology change.
I'd certainly use a document store for new markets and prototyping new products.
Back in 2014 I listened to a presentation from 10gen (now MongoDB Inc) on the subject of data modelling. They were very clear that data modelling becomes MORE important in the NOSQL world, not less. Getting the boundaries of what constitutes a document right is extremely important.
People who can produce a well-designed application object model will produce a competent document model, just as they used to produce competent relational models.
People who collect, implement and invent anti-patterns will produce godawful document models, just as they used to produce abominations in an RDBMS. If you allow them to scale out you will find 5 large instances doing the work that 1 small instance should be able to cope with. If all it takes is the click of a switch to throw hardware at a problem then that will be the predominant choice, and your costs are going to go sky high.
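The document-boundary point can be made concrete. A minimal sketch (the order/customer domain and all field names are my own illustrative assumptions, not from the presentation): embed what is read and written together and stays bounded; reference what changes independently or grows without limit.

```python
# Hypothetical shapes, purely illustrative.
# A well-bounded document embeds only what is read and written together
# and stays a predictable size.
good_order = {
    "_id": "order-1001",
    "customer_id": "cust-42",  # a reference: customers change independently
    "placed_at": "2024-03-01T10:15:00Z",
    "lines": [  # bounded: an order only ever has a handful of lines
        {"sku": "ABC-1", "qty": 2, "price": 9.99},
    ],
}

# Anti-pattern: one document per customer, embedding every order ever placed.
# The array grows without bound, every update rewrites an ever-larger
# document, and the working set balloons: the "5 large instances doing
# the work of 1" problem.
bad_customer = {
    "_id": "cust-42",
    "name": "Jane Doe",
    "orders": [],  # unbounded: grows for the life of the customer
}
```

The difference is invisible in a prototype and ruinous at scale, which is why it rarely shows up until the hardware bill does.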
The appeal of the cloud is not that you can scale up, it is that you can scale down. In the physical world we used to provision for 3x peak load, with the expectation that what we bought would cover demand for the next 3-5 years. Now we can provision for the load we actually have.
I'm sure many of us have worked with software engineers who insisted that they could manage declarative referential integrity (DRI) in the app and that we didn't need it in the DB. I can't think of anyone who proved this was actually achieved. I have lost weeks of my life to finding and dealing with non-DB referential integrity failures.
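The cost of app-managed referential integrity is easy to demonstrate. A minimal sketch using SQLite (table and column names are hypothetical): a declarative constraint rejects an orphan row no matter which code path writes it, whereas an app-level check only holds for the code that remembers to perform it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per-connection only when enabled
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    )
""")
conn.execute("INSERT INTO customers (id) VALUES (1)")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")  # valid parent, accepted

# An app-level check can be skipped, raced past, or bypassed by a second
# writer; the database constraint cannot.
try:
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 999)")
except sqlite3.IntegrityError as e:
    print("rejected orphan row:", e)
```

The weeks lost to non-DB integrity failures are usually spent writing exactly this check, after the fact, as a reconciliation script.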
NOSQL came into being to satisfy specific needs where a more general solution reached its limits. If you are running an app that works at its optimum with a particular non-relational DB technology then fantastic. My experience is that the principle of "Data Is Shared" torpedoes the benefits of that solution. You have an app that works really well and has a particular access pattern that suits your chosen DB platform; then someone wants to run BI workloads against that data. At the very least you need a reporting replica, but more likely you need to shift the data into another data technology entirely.
As soon as you talk about shifting data from A to B you create the need for a reconciliation process and an incident management/resolution process. You also create duplicated processes, the GDPR Right To Be Forgotten being an example. This is true regardless of the tech, but in a world with several data technologies the effort of making sure similar processes work against each of them is expensive, time-consuming and fraught with difficulties.
Ultimately it isn't about how fast one part of the IT estate can run, it is about the whole. Think of it like a small car: you drop in a more powerful engine, but unless you also think about the suspension, brakes, wheels and tyres it isn't going to end well.
How does Cosmos DB perform with updates? MongoDB used to be appalling. Reads - excellent; writes - excellent; updates - is it broken?
- Look at your total cost of ownership
- Look at the productivity change end-to-end, not just for the team gaining from the use of the NOSQL tech
- Watch your costs like a hawk