Rod, are you able to give some details about what went wrong and why the switch to the cloud was unsuccessful? It would be useful to compare notes.
In AWS there are certain things that you just can't do. Some of the server roles you'd normally hold as a DBA simply aren't granted to you, and some of those absences feel very awkward from the DBA perspective. Once you remember the shared responsibility model, though, it becomes obvious why those roles are denied: anything that could be used to threaten the underlying stability of the service is going to be locked down.
There are a whole load of other restrictions too, and it's going to be much the same with the other cloud vendors.
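To make that concrete, here's a quick T-SQL sketch of how you can see what you have (and haven't) been granted on a managed instance. The exact memberships vary by vendor and offering, so treat the comments as illustrative rather than gospel:

```sql
-- Illustrative only: on a managed offering such as AWS RDS for SQL Server,
-- the "master" login you're given is the most privileged one available to
-- you, yet it is not a member of sysadmin.
SELECT IS_SRVROLEMEMBER('sysadmin')      AS is_sysadmin,  -- typically 0 on RDS
       IS_SRVROLEMEMBER('securityadmin') AS is_securityadmin,
       IS_SRVROLEMEMBER('serveradmin')   AS is_serveradmin;

-- List the fixed server roles and whether the current login belongs to each.
SELECT sp.name,
       IS_SRVROLEMEMBER(sp.name) AS am_i_a_member
FROM sys.server_principals AS sp
WHERE sp.type = 'R';
```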
There are a lot of DBA headaches that simply go away. There's also a subtle shift in the power dynamic of database development: some of the things a DBA does to save people from themselves are reduced, so people have to accept that they're responsible for their own car crashes.
I'll try to answer the questions you've asked. (Although I took lots of notes during that time, I don't have access to them at the moment.) I work in state government for the health department, so when COVID-19 hit, it was a major disruption for us, as it was for most of the world. The order to vacate offices and work from home (WFH) meant no one could work in the office. (Obviously, exceptions had to be made, but they were very rare.) Add to that the fact that we had to get web applications up and running to schedule people for COVID testing, report test results, and handle all sorts of other COVID-related needs. The simple fact is we didn't have anything like what was required, and we couldn't work in the office. (As an aside, I LOVE WFH; it has saved me at least 5 hours of commuting daily, and that's if I was lucky, which I often wasn't. And I don't need to be in the office.) Going to the cloud was the best choice, but we had no experience with it.
So that resulted in lots of meetings with Microsoft (we had decided to use Azure) in an effort to learn how to get applications into the cloud. Most of the developers (I'm one of them), all the DBAs, and management from the CIO on down were present at those meetings, learning how to get SQL databases into Azure, how to stand up web applications, and so on. Every application had to be written new; because nothing like this had ever been done here, there was no existing SQL database we could migrate to Azure SQL Database, and lift and shift wasn't an option for anything. Those were heady days, as I saw the prospect of getting experience with current technologies rather than the old stuff typically in use where I work. (By old, I mean 15 years old and older.)
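For what it's worth, creating a brand-new Azure SQL Database is the easy part, and can even be done in plain T-SQL from the logical server's master database. A minimal sketch, with a hypothetical database name and a tier you'd obviously size for yourself:

```sql
-- Run while connected to the logical server's master database.
-- Database name, edition, and service objective are placeholders.
CREATE DATABASE CovidScheduling
(
    EDITION = 'Standard',      -- DTU-based tier
    SERVICE_OBJECTIVE = 'S1',  -- performance level within that tier
    MAXSIZE = 50 GB
);
```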
Here's where things get weird. As far as designing a new database goes, I'm sure any of the DBAs could have done it in SSMS or Azure Data Studio. It was never discussed in the meetings, so I suspect it happened outside of them, if it happened at all. What we spent a LOT of time on, however, was configuring networking and security, at both ends. Whole days would go by with our security people adjusting rules in their equipment while Microsoft people made similar adjustments in Azure. Here I can't tell you exactly what was going on, because security and networking aren't in my wheelhouse, but I remember thinking, why are we spending so much time on this? One networking/security problem would come up after another, and days turned into weeks with what looked to my eyes like little to no real progress. We did get a portion of one application written and tested: creating an appointment for a COVID test.
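I can at least illustrate the Azure end of that rule-tweaking. On an Azure SQL logical server, the server-level firewall is managed with a system procedure in master; the rule name and IP range below are placeholders for whatever our security people would actually have allowed:

```sql
-- Run in the logical server's master database.
EXECUTE sp_set_firewall_rule
    @name             = N'StateOfficeEgress',  -- placeholder rule name
    @start_ip_address = '203.0.113.0',         -- placeholder range
    @end_ip_address   = '203.0.113.255';

-- Review the rules currently in place.
SELECT name, start_ip_address, end_ip_address
FROM sys.firewall_rules;
```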
But then, suddenly, POOF! Just like retailers dropping everything related to a major holiday like Christmas the moment it's passed, it all just stopped. It was an extremely jarring experience. The applications all run on-prem now.
For reasons I don't understand, no one here does a retrospective on anything. At least, not that I've ever seen. Perhaps managers somewhere discuss why something succeeded or failed, but if so, they never involved the developers, DBAs, or other IT people. So, from my point of view, here is what I'd identify as what killed the move to Azure:
Whatever the DBAs did, and how they proceeded, is hidden from me. And we've had so much turnover here that all the DBAs who were there in 2020 are now gone.
Networking and security are both opaque to me, but I feel they were the biggest contributors to the failure to get into the cloud.
And lastly, I feel the developers must shoulder some of the responsibility. The overwhelming majority of developers here are adamant about not adopting anything new. For example, all new applications here were started on .NET Framework 4.5.2, which went out of support many months ago. I warned about .NET 4.5.2 going out of support back in 2018, but my fellow developers ignored me. I remember one stretch during those meetings with Microsoft when they were trying desperately to shove an ASP.NET MVC project built on .NET Framework 4.5.2 into Azure. It took days for them to admit to Microsoft that they had written the app with .NET Framework 4.5.2, at which point the Microsoft personnel told them that Azure doesn't support that framework. There was stunned silence from the developers.
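For anyone who hasn't lived with old-style project files, this is all it takes to pin an app to a dead framework; the fragment below is a generic example, not our actual project:

```xml
<!-- Old-style (non-SDK) .csproj fragment. The targeted framework lives in
     one element; .NET Framework 4.5.2 left support in April 2022, while
     4.8/4.8.1 remain supported. Retargeting is one edit plus retesting. -->
<PropertyGroup>
  <TargetFrameworkVersion>v4.5.2</TargetFrameworkVersion>
</PropertyGroup>
```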
So, I identified two causes of our failure to get an app into the cloud, and perhaps a third if something went wrong on the DBA side that I wasn't aware of. And I could have missed other causes entirely.