I think that is very much a per-use-case question.
For example, the cost to host a data warehouse is going to be different than the cost of a high-throughput OLTP workload.
And there are a lot of questions that come into play. Does your instance do a lot of calculations? If so, you may need more CPU to do them efficiently. Do you have downtime? That is, does your company operate 24/7 or 9-5? Having downtime means you have windows where CPU, memory, and disk can be set aside for other tasks (antivirus scans, disk defragmentation, etc.).
How busy is your SQL instance? Getting 1 transaction per hour vs 1 million transactions per hour will mean very different costs for disk space, memory, and CPU. As for disk space, it again depends on how much data you store and the type of disk you use. If you are running the latest and greatest SSD/flash technology, it may be very expensive, but if you are running on 5400 RPM spinning platters, the cost per GB is a lot lower, at the cost of greatly increased lookup times.
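To make the cost-per-GB trade-off concrete, here is a back-of-the-envelope sketch. The prices are hypothetical placeholders, not vendor quotes — plug in your own $/GB figures:

```python
# Rough storage cost comparison. The $/GB values below are assumed
# placeholders for illustration only; substitute your vendor's pricing.
PRICE_PER_GB = {
    "enterprise_ssd": 0.50,  # assumed $/GB for flash storage
    "5400rpm_hdd": 0.03,     # assumed $/GB for spinning platters
}

def storage_cost(db_size_gb: float, media: str) -> float:
    """Raw capacity cost for one copy of the data (ignores RAID, backups, growth)."""
    return db_size_gb * PRICE_PER_GB[media]

for media in PRICE_PER_GB:
    print(f"400 GB on {media}: ${storage_cost(400, media):,.2f}")
```

Even this toy version shows the gap is an order of magnitude — which is exactly why "how much does storage cost" has no single answer without knowing the media and the workload's latency needs.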
An example of why there isn't a magic one-size-fits-all answer: where I work, we have a 400 GB database for our ERP system. It has a lot of INSERTs and UPDATEs happening daily (200,000 per day on a normal 9-5 day is a very rough estimate). Alongside it, we capture audit data in a second database that is JUST inserts, with very few other actions ever happening on it — maybe a few SELECTs per year for support issues. The main ERP database, since it has a lot of SELECTs, INSERTs, and UPDATEs, is going to need more memory and CPU than the audit one. But the audit database is likely going to be larger, as it captures all of the data changes along with who changed them and when. The main database also has an archiving process that runs yearly to remove old, useless data; this doesn't happen on the audit database. And the archive database gets even less action than the audit one: it takes one large set of writes per year and otherwise sits pretty much idle.
In my opinion, there is not enough information to warrant a guess at the cost, as memory and CPU utilization do not scale simply with database size. CPU and memory scale much more closely with the number of concurrent users and the number of transactions, and even then it won't be a linear scale and will still vary with the use case. For example, if you go by concurrent users, 100 users all running 100 queries at the same time will likely use more CPU and memory than 1 million users connected but running 0 queries.
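The concurrency point above can be sketched with trivial arithmetic (numbers purely illustrative): pressure on CPU and memory tracks queries actually executing, not sessions merely connected.

```python
def active_query_load(connected_users: int, queries_per_user: int) -> int:
    """Crude proxy for CPU/memory pressure: queries executing at once.
    Idle connections cost a little memory, but near-zero CPU."""
    return connected_users * queries_per_user

# 100 users each running 100 simultaneous queries:
busy = active_query_load(100, 100)        # 10,000 executing queries
# 1 million users connected but running nothing:
idle = active_query_load(1_000_000, 0)    # 0 executing queries
print(busy, idle)
```

The small, busy system dominates the huge, idle one — which is why sizing by user count or database size alone is misleading.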
On top of all of the above, it also depends on your hardware vendors. For example, if you have it running on a self-contained physical server (i.e. no SAN, a bare-metal box running SQL), the cost will be different than running it on a SAN under VMware. It further depends on what you consider a database's lifecycle: I have some databases that will likely never see EOL and will run as long as the company keeps running, yet I've seen others survive only a year before we shut them down.