I came across a DB with thousands of tables.
The core process was a master stored procedure that triggered another ten stored procs.
Each stored proc generated a dynamically named table, then used dynamic SQL to populate, update, and index it.
There was some bizarre conditional logic that populated and then stripped out certain records, again all dynamic. Each proc was hundreds, if not thousands, of lines long.
It took a long time to work out what was going on and what those procs were actually trying to do. I deduced a number of things:
- Many of the tables could have been replaced by a common set of shared reference tables.
- The total number of tables could have been 27 partitioned tables, not 4,500 and rising (see the sketch after this list).
- The dynamic SQL was only needed because of the table generation approach.
- The orchestration built out of stored procs should have been handled by a proper orchestration tool.
- The stored procs could have been more kindly described as "applications"
- About 50% of the code in the stored procs did not need to exist, even if the design flaws were left in place.
- The home-built logging solution should have been taken out, shot, burnt, its ashes mixed with salt and dumped into the Mariana Trench.
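
To make the partitioning point concrete, here is a minimal sketch of what the alternative could have looked like. It assumes SQL Server-style partitioning, and every name in it (pf_batch, ps_batch, dbo.results, batch_id) is invented for illustration: one table partitioned on a batch key replaces the per-run tables, and the dynamic SQL collapses into ordinary static statements.

```sql
-- One boundary value per batch/run instead of one table per batch/run.
CREATE PARTITION FUNCTION pf_batch (int)
    AS RANGE RIGHT FOR VALUES (1, 2, 3);  -- extend as new batches arrive

CREATE PARTITION SCHEME ps_batch
    AS PARTITION pf_batch ALL TO ([PRIMARY]);

-- A single, statically named table holds what used to be thousands of copies.
CREATE TABLE dbo.results (
    batch_id   int           NOT NULL,   -- partitioning key, one value per run
    record_key varchar(50)   NOT NULL,
    amount     decimal(18,2) NULL
) ON ps_batch (batch_id);
GO

-- Populate/update logic becomes plain SQL, no string-built statements needed.
DECLARE @batch_id int = 1;

INSERT INTO dbo.results (batch_id, record_key, amount)
VALUES (@batch_id, 'example-key', 123.45);
```

With that shape, the procs that built table names and concatenated SQL strings would have had nothing left to do.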
The worst part was that the team realised their approach guaranteed them job security, so they were highly resistant to anything that simplified the process. I now know why incoming CTOs in web companies tend to burn the existing stuff to the ground rather than try to fix it.