Ok... I was finally able to jump back on this problem and get it resolved:
1) Did a full backup this AM.
2) Detached the database.
3) Renamed the LDF file to ".old".
4) Re-attached the database with no log file (new one was created).
5) Moved the new log file to a different disk and resized it to something much larger.
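For anyone wanting to follow along, steps 2-5 can be sketched in T-SQL roughly like this. The database name, file paths, and logical log name ("MyDB_log") below are placeholders, not the actual names from our system, and a log rebuild like this should only be done after a good full backup:

```sql
-- 2) Detach the database (after taking a full backup).
EXEC sp_detach_db @dbname = N'MyDB';

-- 3) Rename the old LDF at the OS level,
--    e.g. MyDB_log.ldf -> MyDB_log.old.

-- 4) Re-attach using only the MDF; SQL Server builds a new log file.
CREATE DATABASE MyDB
    ON (FILENAME = N'D:\Data\MyDB.mdf')
    FOR ATTACH_REBUILD_LOG;

-- 5) Point the log at a different disk. The new FILENAME takes effect
--    once the database is taken offline, the physical file is moved,
--    and the database is brought back online.
ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_log, FILENAME = N'E:\Logs\MyDB_log.ldf');
ALTER DATABASE MyDB SET OFFLINE;
-- (move the physical .ldf to E:\Logs here)
ALTER DATABASE MyDB SET ONLINE;

-- Then grow it to something much larger up front,
-- to avoid constant autogrow events.
ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_log, SIZE = 20GB);
```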
We then shut the system down (it needed MS updates anyway) and ran a chkdsk /f at reboot, and no problems were found on the drive! Go figure! Last week, every day was a problem day... this week we are correcting issues and the corrections/fixes are "sticking".
Regarding the question about backups and frequency: no, 24-hour data loss is not acceptable (IMO), but it *is* tolerable. In this case, this server (and the databases on it) only receives data once a day. That can be millions of records, but still only once a day. If necessary, the automated processes can be restarted to re-process the data. Unlike the customer-facing server(s), this system actually would have the capacity to run backups on a more frequent basis. But hey... it took 3+ years just to get Mgmt to purchase software to compress and speed up the backups, never mind the question of where we put them (capacity)! 😉
Argue for your limitations, and sure enough they're yours (Richard Bach, Illusions)