• I think the key to getting the best from serverless computing is in the design and architecture of the application.  The costs act as a spur to getting it right.
    Looking at your transaction history, do you see ebbs and flows in traffic, either seasonally or hourly?  If you have a well-designed asynchronous, distributed application then serverless computing can save you a huge amount of money.
    I see the transparency and visibility of the costs as a major advantage.  Having to over-provision to cope with peak-load estimates for 3-5 years into the future is expensive and also doomed to fail.  A DB server sized for 3-5 years' growth in traffic will have other apps and workloads shoe-horned onto it, because in year 1 it is an expensive resource with plenty of headroom, but it will run out of steam before its projected lifespan ends.
    One of the problems with a serverless architecture is working out how to test it in an automated fashion.  How do you unit test a serverless function?
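    That said, the handler itself is usually just a plain function, which makes unit testing it straightforward once you separate it from the cloud plumbing. Here is a minimal sketch, assuming an AWS Lambda-style Python handler (the `handler` function and its event shape are hypothetical, invented for illustration): you invoke the function directly with a fake event and a stub context, and assert on the returned value.

    ```python
    import json
    import unittest

    # Hypothetical Lambda-style handler: a plain function taking an event
    # dict and a context object, returning a response dict. No cloud
    # runtime is needed to call it.
    def handler(event, context):
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"greeting": f"hello, {name}"}),
        }

    class HandlerTest(unittest.TestCase):
        def test_greets_named_caller(self):
            # Invoke the handler directly with a fake event; the context
            # is unused here, so a stub (None) is enough.
            resp = handler({"name": "serverless"}, context=None)
            self.assertEqual(resp["statusCode"], 200)
            body = json.loads(resp["body"])
            self.assertEqual(body["greeting"], "hello, serverless")

        def test_defaults_when_name_missing(self):
            resp = handler({}, context=None)
            body = json.loads(resp["body"])
            self.assertEqual(body["greeting"], "hello, world")

    if __name__ == "__main__":
        unittest.main()
    ```

    The integration side (triggers, IAM, queues) still needs separate testing against a deployed or emulated environment, but keeping business logic in plain functions like this means the bulk of the code stays unit-testable in the ordinary way.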