Here’s what I’d say.
For OS/binaries, I’d probably put 100GB there, maybe more for the pagefile depending on RAM. Guess high, since the system drive is a pain to alter later.
For SQL binaries, the space needed is minimal, and you might drop the pagefile here instead. System databases should be fairly small, though depending on history, they might reach the tens of GB.
For user dbs, you need to know (or guess) your data size. I usually try to size the data file for current data plus 3 months of growth, re-evaluating every month or two. You do need extra space, since your data will grow; hopefully you can estimate a data size for the next year or more. How aggressively you guess depends on how easy it is to add space later. If it’s easy (you’re on a SAN of some sort), guess low. If not, guess high.
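The data-file sizing above is simple arithmetic, but it’s worth writing down. This is a sketch with made-up numbers, not anyone’s real workload; the function name and figures are hypothetical:

```python
def projected_data_size_gb(current_gb, monthly_growth_gb, months_ahead=3):
    """Size the data file for current data plus N months of projected growth."""
    return current_gb + monthly_growth_gb * months_ahead

# e.g. 200GB of data today, growing roughly 10GB/month, sized 3 months out:
print(projected_data_size_gb(200, 10))  # 230
```

Re-run this every month or two with your actual growth numbers and pre-grow the file before it matters.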
For log files, this is workload dependent, and without any history you will be completely guessing. It’s not like a log file is 10% of data; it could be that, could be lower, could be much higher. The log tracks the record of changes, so lots of changes can mean lots of log even if data isn’t growing. As a gross (and likely bad) rule of thumb, I’d set my log file at 15-20% of data and then see where I am. Log backups give me an idea of how much log I’m generating during each period. However, no matter what, I want extra space here. I also like to include placeholder files on this drive (and the data drive) to get me out of emergencies: https://voiceofthedba.com/2014/11/24/placeholders-for-emergencies/
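As a sketch of that rule of thumb (the 15-20% band is the gross starting guess from above, to be corrected once log backup history shows the real workload):

```python
def initial_log_size_gb(data_gb, pct=0.20):
    """Starting log-file guess as a fraction of data size (default 20%)."""
    return data_gb * pct

# 500GB of data gives a starting log guess in this range:
print(initial_log_size_gb(500, 0.15))  # 75.0
print(initial_log_size_gb(500, 0.20))  # 100.0
```

Treat the result as a floor, not a cap, and leave headroom on the drive.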
For backups, you have the idea for fulls, but you might need logs too, so ensure you account for this. More frequent log backups means more files, but the aggregate log is roughly the same. If I generate 10GB of log records a day, that could be four 2.5GB log backups (one every 6 hours) or twenty 500MB files if I back up 20 times a day. Again, remember data grows, so I need to account for the fact that 4 full backups today at 100GB each might be 4 backups at 150GB each in a year.
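The backup-drive math can be sketched the same way, using the example numbers above (10GB of log a day, 4 retained fulls, growth from 100GB to 150GB). The point the code makes explicit: splitting the day’s log into more backup files doesn’t change the total space, while full-backup space should be sized on the projected, not current, database size.

```python
def log_backup_file_size_gb(daily_log_gb, backups_per_day):
    """Size of each log backup file; the daily aggregate stays the same."""
    return daily_log_gb / backups_per_day

def full_backup_space_gb(projected_full_gb, copies_retained):
    """Space for retained full backups, sized on projected growth."""
    return projected_full_gb * copies_retained

print(log_backup_file_size_gb(10, 4))    # 2.5 per file, 10GB/day total
print(log_backup_file_size_gb(10, 20))   # 0.5 per file, still 10GB/day total
print(full_backup_space_gb(150, 4))      # 600, vs. 400 if sized on today's 100GB
```

Add your retention window for log backups on top of this, since you keep more than one day of them.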