    It looks like a version 1 product, really, with all that implies.

    We are very much at the R&D stage with no plans to go further yet. I currently have a data structure in place comprising about 60 tables with associated views, stored procedures, functions and so on. I've had to make some compromises to put it up, because some SQL Server 2008 features are deprecated or unsupported in Azure, e.g. OPENXML is not supported; there's a sketch of the usual workaround below.
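
    For anyone hitting the same OPENXML issue, the usual rewrite is to shred the document with the XML data type's nodes() and value() methods instead, which Azure does support. A minimal sketch, with made-up element names:

        DECLARE @xml XML = N'<Orders>
            <Order><OrderID>1</OrderID><Qty>5</Qty></Order>
            <Order><OrderID>2</OrderID><Qty>3</Qty></Order>
        </Orders>';

        -- nodes() returns one row per matching element;
        -- value() pulls a typed scalar out of each one.
        SELECT o.n.value('(OrderID)[1]', 'INT') AS OrderID,
               o.n.value('(Qty)[1]',     'INT') AS Qty
        FROM @xml.nodes('/Orders/Order') AS o(n);

    The upside is there's no sp_xml_preparedocument/sp_xml_removedocument handle management to clean up either.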

    Because there are charges to move data in and out, Microsoft say it's only really cost effective if you also deploy the application that uses the data in Windows Azure (so all the data stays in the cloud). I haven't tried this, as the application I'm using doesn't lend itself to that very easily, so I can't confirm what the charges would be in practice.

    DML looks quite quick (caveat: low volumes only so far); DDL is much slower, roughly ten times slower in my tests. I'm in the UK and on the European data centre.
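
    If you want to reproduce that kind of rough comparison yourself, a crude timing harness is easy to knock up. The table name below is invented, and this only brackets wall-clock time around each statement, so treat the numbers as indicative:

        DECLARE @t0 DATETIME2 = SYSDATETIME();

        -- DDL: create a throwaway table
        CREATE TABLE dbo.TimingTest (Id INT PRIMARY KEY, Payload NVARCHAR(100));

        DECLARE @t1 DATETIME2 = SYSDATETIME();

        -- DML: insert a small batch of rows
        DECLARE @i INT = 0;
        WHILE @i < 100
        BEGIN
            INSERT INTO dbo.TimingTest (Id, Payload) VALUES (@i, N'test row');
            SET @i = @i + 1;
        END;

        DECLARE @t2 DATETIME2 = SYSDATETIME();

        SELECT DATEDIFF(MILLISECOND, @t0, @t1) AS ddl_ms,
               DATEDIFF(MILLISECOND, @t1, @t2) AS dml_ms;

        DROP TABLE dbo.TimingTest;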

    You have to be very disciplined to use it, as you are charged for whatever you put up whether you use it or not. The lack of tools like Profiler makes it unsuitable for development anyway; it might be OK for testing.

    Currently my gut feeling is that the more traditional third-party SQL Server hosting model offered by a number of vendors works better (for live systems), particularly given the 10 GB database size limit in Azure. And frankly, the sharding approach they suggest for larger databases is a non-runner for most applications unless they were designed that way from the outset (see the sketch below).
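
    To illustrate why retrofitting is so painful: sharding only works cleanly when every table carries the shard key and every query filters on it, so the application knows which database holds the row. A minimal sketch of what "designed like that" means, with invented names and a deliberately naive routing rule:

        -- Every table carries the shard key (CustomerId here) as the
        -- leading column of its key, so any row can be located from it.
        CREATE TABLE dbo.Orders (
            CustomerId INT      NOT NULL,
            OrderId    INT      NOT NULL,
            PlacedOn   DATETIME NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY (CustomerId, OrderId)
        );

        -- Queries must always filter on the shard key; the application
        -- uses it to pick one of N identical databases, e.g.
        --   shard = CustomerId % N
        DECLARE @CustomerId INT = 42;
        SELECT OrderId, PlacedOn
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId;

    If your existing schema has tables and queries that don't hang off one such key, every one of them has to be reworked before sharding is even possible, which is why I say it's a non-runner unless it was there from day one.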

    That's a potted summary of my view.

    Hope that helps!

    Tim
