The app writes a new file using a filename that contains a timestamp and a UUID. Another asynchronous process then pulls those files in, batches them, and takes care of the rest. That way, even if your app crashes, nothing is lost and the results show up in the database with minimal delay!
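To illustrate that file-per-event idea (the `spool` directory name and the record shape are just made up for the sketch), something like this: write to a temporary name first, then rename, so the batch process never picks up a half-written file. On POSIX, a rename within the same filesystem is atomic.

```python
import json
import os
import time
import uuid

SPOOL_DIR = "spool"  # hypothetical directory the batch process watches

def write_event(record: dict) -> str:
    """Write one record to its own uniquely named file.

    The timestamp keeps names roughly sortable; the UUID guarantees
    uniqueness even if two events land in the same nanosecond.
    """
    os.makedirs(SPOOL_DIR, exist_ok=True)
    name = f"{time.time_ns()}-{uuid.uuid4()}.json"
    tmp_path = os.path.join(SPOOL_DIR, name + ".tmp")
    final_path = os.path.join(SPOOL_DIR, name)
    with open(tmp_path, "w", encoding="utf-8") as f:
        json.dump(record, f)
        f.flush()
        os.fsync(f.fileno())  # push the data to disk before the rename
    os.rename(tmp_path, final_path)  # atomic: reader sees all or nothing
    return final_path
```

The batch process can then simply ignore anything ending in `.tmp` and safely consume (and delete) everything else.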
The only file writing along these lines that I'm familiar with buffers the output in memory, presumably writing a "page" once it has accumulated enough. I doubt that it flushes on whole-record boundaries (but it might? e.g. splitting at a line break), so with that method it seems I would lose any in-memory buffered data in a crash, and quite possibly the last record written to the file would be incomplete (i.e. the final page write would contain a partial record).
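For what it's worth, that buffering worry can be addressed explicitly: flush and fsync after each record, and have the reader discard a trailing line with no newline, since that's the only record a crash can corrupt. A rough sketch (the function names are mine, not from any particular library):

```python
import os

def append_record(path: str, line: str) -> None:
    """Append one newline-terminated record and force it to disk.

    flush() empties the user-space buffer; fsync() asks the OS to
    push its page cache to the device. A crash can then only ever
    lose the record currently being written, never earlier ones.
    """
    with open(path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
        f.flush()
        os.fsync(f.fileno())

def read_complete_records(path: str) -> list[str]:
    """Return only the records that end in a newline.

    If the writer died mid-write, the final line may be a partial
    record with no trailing newline; the reader just skips it.
    """
    with open(path, "r", encoding="utf-8") as f:
        data = f.read()
    lines = data.split("\n")
    # split() always leaves one extra element at the end: an empty
    # string if the file ended cleanly, or a partial record if not.
    lines.pop()
    return lines
```

The trade-off is that an fsync per record is slow (it's essentially what a database pays for its COMMIT), which is one reason libraries buffer in the first place.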
I get that logging to a table may not be scalable, but at least I know that SQL will take care of all the COMMIT/ROLLBACK side of it and I don't have to worry about that 🙂 With a file-write approach, though, it seems to me I've got a lot of careful edge-case testing to do to make sure it fails safe. But I'm not very familiar with the mechanics; it's been a long time (20 years at least 😛) since we changed from ISAM to SQL and I stopped having to write to files directly, so it might be that I'm worrying unnecessarily?