I wrote about serverless applications recently. Not many people are using serverless technology for their code yet, but some have had great success. It seems like a move that makes sense, though there are challenges in managing the code when you deploy functions or snippets instead of an entire codebase. I worry a bit about tracking billing, usage, versions, and deployment pipelines with serverless, but I know things will get better over time.
Now Microsoft Azure has added a new option for databases: Azure SQL DB Serverless. This is a SQL Server database that bills you by the second for the compute you actually use. It works by essentially pausing the compute tier when no clients are accessing the system. You are still billed for storage all of the time, which makes sense, but storage is cheap. The system also has quite a few automatic scaling features, many of which aren't as simple to understand as I might hope.
How many times have you purchased a server (or rented one) and found it is barely used, trundling along at 10-20% CPU? I've done that quite a few times, especially when I had no idea of the workload early on. Later, it's often not been worth my time to try to consolidate the database onto another machine. Often this is a fear-based response: the cost of the machine is already sunk, and I don't want to take the chance that a burst in workload will overload another system.
For sporadic-use applications, serverless databases might be a good fit. I can avoid paying for compute during low periods, such as overnight. I'm essentially renting a machine at specific times and not at others, but the compute layer gets provisioned as I need it. That seems like the ideal situation for a lot of apps, assuming I can run them in the cloud. There are some restrictions in preview, such as the inability to pause the system unless there are 6+ hours of no activity, but I'm hoping that changes. Six hours seems like a long time.
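If you want to experiment, a serverless database can be provisioned with the Azure CLI. This is just a sketch with placeholder names for the resource group, server, and database; the `--auto-pause-delay` value is in minutes, and in preview the minimum is that six-hour window mentioned above.

```shell
# Sketch: create a serverless Azure SQL database with the Azure CLI.
# Resource group, server, and database names below are placeholders.
az sql db create \
  --resource-group my-rg \
  --server my-sql-server \
  --name my-sporadic-db \
  --edition GeneralPurpose \
  --family Gen5 \
  --capacity 2 \
  --compute-model Serverless \
  --auto-pause-delay 360   # minutes of inactivity before auto-pause (6 hours)
```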
One thing I do think about with this database service is that it will require longer timeouts and more resilient applications that can handle a warm-up period if the compute layer has been shut down. I also worry a bit about cache and the buffer pool. If you've ever dealt with servers that regularly restart, there is a bit of a slow period as the buffer pool fills and code is compiled. Perhaps Microsoft has ways of saving off some of this state, perhaps capturing plans in the Query Store that can avoid excessive compilations on restart, but I do worry that slow starts will increase user complaints and tickets filed. Those costs might not be worth the savings from shutting down your database resources.
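The resilience piece mostly comes down to retrying connections with a backoff while a paused database resumes. Here's a minimal sketch, not tied to any particular driver: `connect` stands in for whatever callable your data-access library uses to open a connection, and the delay values are illustrative, not recommendations.

```python
import random
import time

def connect_with_retry(connect, attempts=5, base_delay=2.0):
    """Try to open a connection, backing off while a paused
    database resumes. `connect` is any callable that either
    returns a connection or raises an exception on failure."""
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            # Exponential backoff with a little jitter: a resuming
            # serverless database can take a while to warm up.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            time.sleep(delay)
```

The same idea applies to individual queries, not just the initial connection, since a pause can happen between requests in a sporadically used app.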