I've searched SSC and asked Google, and the answer to my question seems more like a hack than a standard, supported, production-ready process, so I'll post the question here in the hope that someone has solved this problem before me.
Scenario (SQL 2008 R2):
You receive a one-off backup of a live database. This is the only full backup you will ever receive. At 24-hour intervals you will be sent (via FTP) a transaction log backup. You will never be given another full backup, and you will get a log backup every day, forever.
You have to provide the customer with a read/write copy of the database using the one-off full backup and the daily log backups. The customer is perfectly happy for the database to be overwritten every 24 hours and for any updates to be lost (yes, really!).
Some things to consider:
This could be manageable for a few days: restore the backup, apply the logs, off we go. But 10, 20, 100 days later it'll be a case of restoring the backup and then applying 100 log backups! The full backup is ~50GB; the transaction log backups are 100MB - 2GB each. I have to rule out log shipping, etc., as the live database is completely cut off from the customer's network.
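To make the scale of the problem concrete, here is a sketch of that "restore from scratch" approach in T-SQL. The database name and file paths are made up for illustration; the point is that there is one RESTORE LOG per day, forever, and once the database is recovered no further logs can be applied to it:

```sql
-- Hypothetical names/paths: CustDB, D:\Backups\...
RESTORE DATABASE CustDB
    FROM DISK = N'D:\Backups\CustDB_full.bak'
    WITH NORECOVERY, REPLACE;

-- One of these per day the feed has been running...
RESTORE LOG CustDB FROM DISK = N'D:\Backups\CustDB_log_day1.trn' WITH NORECOVERY;
RESTORE LOG CustDB FROM DISK = N'D:\Backups\CustDB_log_day2.trn' WITH NORECOVERY;
-- ...100 days in, that's 100 RESTORE LOG statements.

-- Recovering makes the copy read/write, but also ends the restore
-- sequence, so tomorrow you start again from the 50GB full backup.
RESTORE DATABASE CustDB WITH RECOVERY;
```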
The only solution I know is really...horrible. These are the steps:
1) Restore the database WITH NORECOVERY
2) Apply the log file(s) WITH STANDBY
3) The database is now in standby/read-only mode
4) Take the standby database offline
5) Create a new, empty database with the same number of data and log files
6) Take the new database offline
7) At the OS level, replace the newly created data and log files with copies of the standby/read-only database's files (renaming them to match the files being overwritten)
8) Bring the standby/read-only database back online (ready to accept the next day's log backups)
9) Bring the new database online (ready for read/write) -- this generates a warning, but taking it offline and back online again seems to update the system catalog and no warning is reported.
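For reference, the SQL side of the steps above looks roughly like this (database names, paths, and the undo file location are all made up; the file copy in step 7 happens outside SQL Server):

```sql
-- Step 1 (one-off): restore the full backup, leaving the database restoring
RESTORE DATABASE CustDB_Standby
    FROM DISK = N'D:\Backups\CustDB_full.bak'
    WITH NORECOVERY,
         MOVE N'CustDB'     TO N'D:\Data\CustDB_Standby.mdf',
         MOVE N'CustDB_log' TO N'D:\Data\CustDB_Standby.ldf';

-- Steps 2-3 (daily): apply the new log WITH STANDBY so the database is
-- readable *and* can still accept tomorrow's log backup
RESTORE LOG CustDB_Standby
    FROM DISK = N'D:\Backups\CustDB_log_20230101.trn'
    WITH STANDBY = N'D:\Data\CustDB_Standby_undo.bak';

-- Steps 4 and 8: take the standby database offline so its files can be
-- copied, then bring it back online afterwards
ALTER DATABASE CustDB_Standby SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- ...OS-level copy of the .mdf/.ldf files over the new database's files (step 7)...
ALTER DATABASE CustDB_Standby SET ONLINE;
```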
Is there a supported/standard way of achieving this?