Thank you very much for the input and ideas.
There are no hard and fast goals that I have to meet regarding the performance.
This process will still run out of hours and as such is not going to have any impact on live processing.
As such, the performance / efficiency improvement is solely to make me happy.
At the moment, my test case that I'm using takes around 3 minutes to drop all of the tables.
If I found a way to drop all 10,000 tables in under 2 minutes (for example), I would feel like I'd achieved something.
I suppose the bigger issue here, beyond my specific situation, was for me to understand whether it's at all possible to efficiently drop multiple tables.
If I could, there is potential for me to use a similar process with other tasks.
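For what it's worth, one way to drop many tables with fewer round trips is that DROP TABLE accepts a comma-separated list of names, so the drops can be batched into one statement via dynamic SQL. This is just a sketch, assuming SQL Server 2017+ (for STRING_AGG) and a hypothetical `stg_` naming prefix for the tables to be removed; the CAST to nvarchar(max) matters so the aggregated string isn't truncated at 4,000 characters:

```sql
-- Sketch only: drops every table whose name starts with the
-- hypothetical prefix 'stg_', in a single batched DROP TABLE.
DECLARE @sql nvarchar(max);

SELECT @sql = N'DROP TABLE '
     + STRING_AGG(CAST(QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                       AS nvarchar(max)), N', ')
     + N';'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name LIKE N'stg[_]%';   -- [_] escapes the underscore wildcard

EXEC sys.sp_executesql @sql;
```

In practice you'd probably cap each batch at a few hundred tables and loop, since one giant statement holds its schema locks until the whole statement completes.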
Given that the overall priority of this process is functionality, rather than performance, I'm not planning to spend considerable time attempting to make it more efficient.
I also said in a previous post that I had reached the size limit of the parameter. After reviewing it, I don't believe I have: the parameter is easily big enough to handle my table names. I don't know why I thought I'd be hitting a varchar(max) limit.
Thanks for the help.